* [PATCH v5 0/8] i915 PMU and engine busy stats
@ 2017-09-25 15:15 Tvrtko Ursulin
  2017-09-25 15:15 ` [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns Tvrtko Ursulin
                   ` (9 more replies)
  0 siblings, 10 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Fifth spin of the i915 PMU series.

Just some small cleanups across a few patches and a rebase to latest drm-tip.

Patches 1-2 are small refactors to make the following work easier.

Patch 3 is the main bit.

Patch 4 is a small optimisation on top, to only run the sampling timer when it
is required, depending on GPU awake status and active PMU counters.

Patches 5-6 add software engine busyness tracking, which is then used from the
PMU in patch 7. This allows a more efficient and more accurate engine busyness
metric.

And finally patch 8 is an additional optimisation which hides the whole cost
of the software engine busyness tracking behind a single nop instruction in
the off case.

Tvrtko Ursulin (8):
  drm/i915: Convert intel_rc6_residency_us to ns
  drm/i915: Extract intel_get_cagf
  drm/i915/pmu: Expose a PMU interface for perf queries
  drm/i915/pmu: Suspend sampling when GPU is idle
  drm/i915: Wrap context schedule notification
  drm/i915: Engine busy time tracking
  drm/i915/pmu: Wire up engine busy stats to PMU
  drm/i915: Gate engine stats collection with a static key

 drivers/gpu/drm/i915/Makefile           |   1 +
 drivers/gpu/drm/i915/i915_debugfs.c     |   9 +-
 drivers/gpu/drm/i915/i915_drv.c         |   2 +
 drivers/gpu/drm/i915/i915_drv.h         |  96 +++-
 drivers/gpu/drm/i915/i915_gem.c         |   1 +
 drivers/gpu/drm/i915/i915_gem_request.c |   1 +
 drivers/gpu/drm/i915/i915_pmu.c         | 855 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_reg.h         |   3 +
 drivers/gpu/drm/i915/i915_sysfs.c       |  20 +-
 drivers/gpu/drm/i915/intel_engine_cs.c  | 136 +++++
 drivers/gpu/drm/i915/intel_lrc.c        |  19 +-
 drivers/gpu/drm/i915/intel_pm.c         |  41 +-
 drivers/gpu/drm/i915/intel_ringbuffer.h | 147 ++++++
 include/uapi/drm/i915_drm.h             |  57 +++
 14 files changed, 1353 insertions(+), 35 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_pmu.c

-- 
2.9.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:08   ` Chris Wilson
  2017-09-25 15:15 ` [PATCH 2/8] drm/i915: Extract intel_get_cagf Tvrtko Ursulin
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Will be used for exposing the PMU counters.

v2:
 * Move intel_runtime_pm_get/put to the callers. (Chris Wilson)
 * Restore full unit conversion precision.
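
The precision point can be sketched in userspace. The following is a
hypothetical standalone model of the kernel's DIV_ROUND_UP_ULL based
conversion, not the driver code itself; treating czclk as a frequency in kHz
is an assumption here, and the helper names are made up for illustration:

```c
#include <stdint.h>

/* Round-up integer division, mirroring the kernel's DIV_ROUND_UP_ULL. */
static uint64_t div_round_up_ull(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

/*
 * VLV/CHV residency counters tick in CZ clock units. Converting the raw
 * value to ns first (mul = 1000000, div = czclk frequency in kHz, an
 * assumption in this sketch) and only rounding to us at the end keeps the
 * full unit conversion precision mentioned in the v2 changelog.
 */
static uint64_t rc6_residency_ns(uint64_t time_hw, uint32_t czclk_khz)
{
	return div_round_up_ull(time_hw * 1000000ull, czclk_khz);
}

static uint64_t rc6_residency_us(uint64_t time_hw, uint32_t czclk_khz)
{
	return div_round_up_ull(rc6_residency_ns(time_hw, czclk_khz), 1000);
}
```

A single coarse division straight to us would discard sub-us information on
every readout, which matters once callers start diffing the counter.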

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h   |  8 +++++++-
 drivers/gpu/drm/i915/i915_sysfs.c |  9 +++++++--
 drivers/gpu/drm/i915/intel_pm.c   | 27 +++++++++++++--------------
 3 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c1e93a61d81b..ee03e839356f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -4128,9 +4128,15 @@ void vlv_phy_reset_lanes(struct intel_encoder *encoder);
 
 int intel_gpu_freq(struct drm_i915_private *dev_priv, int val);
 int intel_freq_opcode(struct drm_i915_private *dev_priv, int val);
-u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
+u64 intel_rc6_residency_ns(struct drm_i915_private *dev_priv,
 			   const i915_reg_t reg);
 
+static inline u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
+					 const i915_reg_t reg)
+{
+	return DIV_ROUND_UP_ULL(intel_rc6_residency_ns(dev_priv, reg), 1000);
+}
+
 #define I915_READ8(reg)		dev_priv->uncore.funcs.mmio_readb(dev_priv, (reg), true)
 #define I915_WRITE8(reg, val)	dev_priv->uncore.funcs.mmio_writeb(dev_priv, (reg), (val), true)
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index d61c8727f756..1a34d32d0092 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -42,8 +42,13 @@ static inline struct drm_i915_private *kdev_minor_to_i915(struct device *kdev)
 static u32 calc_residency(struct drm_i915_private *dev_priv,
 			  i915_reg_t reg)
 {
-	return DIV_ROUND_CLOSEST_ULL(intel_rc6_residency_us(dev_priv, reg),
-				     1000);
+	u64 res;
+
+	intel_runtime_pm_get(dev_priv);
+	res = intel_rc6_residency_us(dev_priv, reg);
+	intel_runtime_pm_put(dev_priv);
+
+	return DIV_ROUND_CLOSEST_ULL(res, 1000);
 }
 
 static ssize_t
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index c66af09e27a7..8b4ffff11cf7 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -9343,34 +9343,33 @@ static u64 vlv_residency_raw(struct drm_i915_private *dev_priv,
 	return lower | (u64)upper << 8;
 }
 
-u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
+u64 intel_rc6_residency_ns(struct drm_i915_private *dev_priv,
 			   const i915_reg_t reg)
 {
-	u64 time_hw, units, div;
+	u64 time_hw;
+	u32 mul, div;
 
 	if (!intel_enable_rc6())
 		return 0;
 
-	intel_runtime_pm_get(dev_priv);
-
 	/* On VLV and CHV, residency time is in CZ units rather than 1.28us */
 	if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
-		units = 1000;
+		mul = 1000000;
 		div = dev_priv->czclk_freq;
-
 		time_hw = vlv_residency_raw(dev_priv, reg);
-	} else if (IS_GEN9_LP(dev_priv)) {
-		units = 1000;
-		div = 1200;		/* 833.33ns */
 
-		time_hw = I915_READ(reg);
 	} else {
-		units = 128000; /* 1.28us */
-		div = 100000;
+		/* 833.33ns units on Gen9LP, 1.28us elsewhere. */
+		if (IS_GEN9_LP(dev_priv)) {
+			mul = 10000;
+			div = 12;
+		} else {
+			mul = 1280;
+			div = 1;
+		}
 
 		time_hw = I915_READ(reg);
 	}
 
-	intel_runtime_pm_put(dev_priv);
-	return DIV_ROUND_UP_ULL(time_hw * units, div);
+	return DIV_ROUND_UP_ULL(time_hw * mul, div);
 }
-- 
2.9.5


* [PATCH 2/8] drm/i915: Extract intel_get_cagf
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
  2017-09-25 15:15 ` [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 15:15 ` [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries Tvrtko Ursulin
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Code to be shared between debugfs and the PMU implementation.

v2: Checkpatch cleanup.
v3: Also consolidate i915_sysfs.c/gt_act_freq_mhz_show.
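
For illustration, what the shared helper does is a plain mask-and-shift; the
field layout below is a made-up stand-in, not the real per-generation
GEN6/HSW/GEN9 CAGF defines from i915_reg.h:

```c
#include <stdint.h>

/*
 * Illustrative field layout only; the real CAGF mask and shift values
 * live in i915_reg.h and differ per generation.
 */
#define CAGF_SHIFT 8
#define CAGF_MASK  (0x7fu << CAGF_SHIFT)

/* Extract the current actual GPU frequency field from an RPSTAT1-style
 * register value, as the shared intel_get_cagf() helper does. */
static uint32_t get_cagf(uint32_t rpstat)
{
	return (rpstat & CAGF_MASK) >> CAGF_SHIFT;
}
```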

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v2)
---
 drivers/gpu/drm/i915/i915_debugfs.c |  9 ++-------
 drivers/gpu/drm/i915/i915_drv.h     |  2 ++
 drivers/gpu/drm/i915/i915_sysfs.c   | 11 +++--------
 drivers/gpu/drm/i915/intel_pm.c     | 14 ++++++++++++++
 4 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 847f8e8d6b58..e5aaac8579b3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1113,13 +1113,8 @@ static int i915_frequency_info(struct seq_file *m, void *unused)
 		rpdownei = I915_READ(GEN6_RP_CUR_DOWN_EI) & GEN6_CURIAVG_MASK;
 		rpcurdown = I915_READ(GEN6_RP_CUR_DOWN) & GEN6_CURBSYTAVG_MASK;
 		rpprevdown = I915_READ(GEN6_RP_PREV_DOWN) & GEN6_CURBSYTAVG_MASK;
-		if (INTEL_GEN(dev_priv) >= 9)
-			cagf = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
-		else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
-			cagf = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
-		else
-			cagf = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
-		cagf = intel_gpu_freq(dev_priv, cagf);
+		cagf = intel_gpu_freq(dev_priv,
+				      intel_get_cagf(dev_priv, rpstat));
 
 		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ee03e839356f..6143a709c605 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -4137,6 +4137,8 @@ static inline u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
 	return DIV_ROUND_UP_ULL(intel_rc6_residency_ns(dev_priv, reg), 1000);
 }
 
+u32 intel_get_cagf(struct drm_i915_private *dev_priv, u32 rpstat1);
+
 #define I915_READ8(reg)		dev_priv->uncore.funcs.mmio_readb(dev_priv, (reg), true)
 #define I915_WRITE8(reg, val)	dev_priv->uncore.funcs.mmio_writeb(dev_priv, (reg), (val), true)
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 1a34d32d0092..2f74ae55332d 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -257,14 +257,9 @@ static ssize_t gt_act_freq_mhz_show(struct device *kdev,
 		freq = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
 		ret = intel_gpu_freq(dev_priv, (freq >> 8) & 0xff);
 	} else {
-		u32 rpstat = I915_READ(GEN6_RPSTAT1);
-		if (INTEL_GEN(dev_priv) >= 9)
-			ret = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
-		else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
-			ret = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
-		else
-			ret = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
-		ret = intel_gpu_freq(dev_priv, ret);
+		ret = intel_gpu_freq(dev_priv,
+				     intel_get_cagf(dev_priv,
+						    I915_READ(GEN6_RPSTAT1)));
 	}
 	mutex_unlock(&dev_priv->rps.hw_lock);
 
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 8b4ffff11cf7..e765c553e17e 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -9373,3 +9373,17 @@ u64 intel_rc6_residency_ns(struct drm_i915_private *dev_priv,
 
 	return DIV_ROUND_UP_ULL(time_hw * mul, div);
 }
+
+u32 intel_get_cagf(struct drm_i915_private *dev_priv, u32 rpstat)
+{
+	u32 cagf;
+
+	if (INTEL_GEN(dev_priv) >= 9)
+		cagf = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
+	else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
+		cagf = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
+	else
+		cagf = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
+
+	return  cagf;
+}
-- 
2.9.5


* [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
  2017-09-25 15:15 ` [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns Tvrtko Ursulin
  2017-09-25 15:15 ` [PATCH 2/8] drm/i915: Extract intel_get_cagf Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:37   ` Chris Wilson
  2017-09-25 15:15 ` [PATCH 4/8] drm/i915/pmu: Suspend sampling when GPU is idle Tvrtko Ursulin
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx; +Cc: Peter Zijlstra

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

From: Chris Wilson <chris@chris-wilson.co.uk>
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
From: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>

The first goal is to be able to measure GPU (and individual ring) busyness
without having to poll registers from userspace. (Which not only incurs
holding the forcewake lock indefinitely, perturbing the system, but also
runs the risk of hanging the machine.) As an alternative we can use the
perf event counter interface to sample the ring registers periodically
and send those results to userspace.

The functionality is exported to userspace via the existing perf PMU
API and can be exercised with the existing tools. For example:

  perf stat -a -e i915/rcs0-busy/ -I 1000

This will print the render engine busyness once per second. All the
performance counters can be enumerated (perf list) and have their unit of
measure correctly reported in sysfs.
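
As a rough model of how the sampled counters accumulate: each timer tick
credits a full period to every counter whose condition held at that instant.
This is a simplified standalone sketch of the engines_sample() logic in the
patch, not the driver code itself:

```c
#include <stdbool.h>
#include <stdint.h>

#define PERIOD_NS 5000000ull /* 200 Hz sampling, as in the patch */

struct engine_sample {
	uint64_t busy_ns;
	uint64_t wait_ns;
};

/*
 * One sampling tick: if the engine looked busy (or waiting) at this
 * instant, assume it was busy (or waiting) for the whole period.
 */
static void sample_tick(struct engine_sample *s, bool busy, bool wait)
{
	if (busy)
		s->busy_ns += PERIOD_NS;
	if (wait)
		s->wait_ns += PERIOD_NS;
}
```

A reader then divides the accumulated busy time delta by the elapsed wall
time to obtain a busyness percentage, which is roughly what perf stat's
interval mode ends up showing.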

v1-v2 (Chris Wilson):

v2: Use a common timer for the ring sampling.

v3: (Tvrtko Ursulin)
 * Decouple uAPI from i915 engine ids.
 * Complete uAPI defines.
 * Refactor some code to helpers for clarity.
 * Skip sampling disabled engines.
 * Expose counters in sysfs.
 * Pass in fake regs to avoid null ptr deref in perf core.
 * Convert to class/instance uAPI.
 * Use shared driver code for rc6 residency, power and frequency.

v4: (Dmitry Rogozhkin)
 * Register PMU with .task_ctx_nr=perf_invalid_context
 * Expose cpumask for the PMU with the single CPU in the mask
 * Properly support pmu->stop(): it should call pmu->read()
 * Properly support pmu->del(): it should call stop(event, PERF_EF_UPDATE)
 * Introduce refcounting of event subscriptions.
 * Make pmu.busy_stats a refcounter to avoid busy stats going away
   when an event is deleted.
 * Expose a cpumask for the i915 PMU to avoid creation of multiple events
   of the same type followed by counter aggregation by perf-stat.
 * Track CPUs going online/offline in order to migrate the perf context.
   Since the cpumask will (likely) initially contain only CPU0,
   CONFIG_BOOTPARAM_HOTPLUG_CPU0 is needed to see the effect of the CPU
   status tracking.
 * The end result is that only global events are supported and perf stat
   works correctly.
 * Deny perf driver level sampling - it is prohibited for uncore PMU.

v5: (Tvrtko Ursulin)

 * Don't hardcode number of engine samplers.
 * Rewrite event ref-counting for correctness and simplicity.
 * Store initial counter value when starting already enabled events
   to correctly report values to all listeners.
 * Fix RC6 residency readout.
 * Comments, GPL header.

v6:
 * Add missing entry to v4 changelog.
 * Fix accounting in CPU hotplug case by copying the approach from
   arch/x86/events/intel/cstate.c. (Dmitry Rogozhkin)

v7:
 * Log failure message only on failure.
 * Remove CPU hotplug notification state on unregister.

v8:
 * Fix error unwind on failed registration.
 * Checkpatch cleanup.

v9:
 * Drop the energy metric, it is available via intel_rapl_perf.
   (Ville Syrjälä)
 * Use HAS_RC6(p). (Chris Wilson)
 * Handle unsupported non-engine events. (Dmitry Rogozhkin)
 * Rebase for intel_rc6_residency_ns needing caller managed
   runtime pm.
 * Drop HAS_RC6 checks from the read callback since creating those
   events will be rejected at init time already.
 * Add counter units to sysfs so perf stat output is nicer.
 * Cleanup the attribute tables for brevity and readability.

v10:
 * Fixed queued accounting.

v11:
 * Move intel_engine_lookup_user to intel_engine_cs.c
 * Commit update. (Joonas Lahtinen)
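
The class/instance uAPI mentioned in v3 packs the engine class, instance and
sample type into the perf config value. A rough standalone sketch of the
packing and unpacking follows; the bit widths are illustrative assumptions,
and the authoritative I915_PMU_* defines are in include/uapi/drm/i915_drm.h:

```c
#include <stdint.h>

/*
 * Illustrative bit layout; the authoritative I915_PMU_SAMPLE_BITS,
 * I915_PMU_SAMPLE_MASK and I915_PMU_CLASS_SHIFT defines live in
 * include/uapi/drm/i915_drm.h.
 */
#define SAMPLE_BITS 4
#define SAMPLE_MASK 0xfu
#define CLASS_SHIFT 16

/* Pack class, instance and sample type into one perf config value. */
static uint64_t engine_config(uint8_t eclass, uint8_t instance, uint8_t sample)
{
	return ((uint64_t)eclass << CLASS_SHIFT) |
	       ((uint64_t)instance << SAMPLE_BITS) |
	       sample;
}

/* Unpacking helpers, mirroring engine_event_class/instance/sample(). */
static uint8_t config_class(uint64_t config)
{
	return (config >> CLASS_SHIFT) & 0xff;
}

static uint8_t config_instance(uint64_t config)
{
	return (config >> SAMPLE_BITS) & 0xff;
}

static uint8_t config_sample(uint64_t config)
{
	return config & SAMPLE_MASK;
}
```

Keeping class and instance in the config value, rather than internal i915
engine ids, is what decouples the uAPI from the driver's engine numbering.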

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 drivers/gpu/drm/i915/Makefile           |   1 +
 drivers/gpu/drm/i915/i915_drv.c         |   2 +
 drivers/gpu/drm/i915/i915_drv.h         |  78 ++++
 drivers/gpu/drm/i915/i915_pmu.c         | 707 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_reg.h         |   3 +
 drivers/gpu/drm/i915/intel_engine_cs.c  |  35 ++
 drivers/gpu/drm/i915/intel_ringbuffer.h |  25 ++
 include/uapi/drm/i915_drm.h             |  57 +++
 8 files changed, 908 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/i915_pmu.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 5182e3d5557d..5c6013961b3b 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -26,6 +26,7 @@ i915-y := i915_drv.o \
 
 i915-$(CONFIG_COMPAT)   += i915_ioc32.o
 i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o
+i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o
 
 # GEM code
 i915-y += i915_cmd_parser.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 7056bb299dc6..03bbe23e4df8 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1200,6 +1200,7 @@ static void i915_driver_register(struct drm_i915_private *dev_priv)
 	struct drm_device *dev = &dev_priv->drm;
 
 	i915_gem_shrinker_init(dev_priv);
+	i915_pmu_register(dev_priv);
 
 	/*
 	 * Notify a valid surface after modesetting,
@@ -1254,6 +1255,7 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
 	intel_opregion_unregister(dev_priv);
 
 	i915_perf_unregister(dev_priv);
+	i915_pmu_unregister(dev_priv);
 
 	i915_teardown_sysfs(dev_priv);
 	i915_guc_log_unregister(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 6143a709c605..a40c314eed5f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -40,6 +40,7 @@
 #include <linux/hash.h>
 #include <linux/intel-iommu.h>
 #include <linux/kref.h>
+#include <linux/perf_event.h>
 #include <linux/pm_qos.h>
 #include <linux/reservation.h>
 #include <linux/shmem_fs.h>
@@ -2198,6 +2199,70 @@ struct intel_cdclk_state {
 	unsigned int cdclk, vco, ref;
 };
 
+enum {
+	__I915_SAMPLE_FREQ_ACT = 0,
+	__I915_SAMPLE_FREQ_REQ,
+	__I915_NUM_PMU_SAMPLERS
+};
+
+/**
+ * How many different events we track in the global PMU mask.
+ *
+ * It is also used to know the needed number of event reference counters.
+ */
+#define I915_PMU_MASK_BITS \
+	((1 << I915_PMU_SAMPLE_BITS) + \
+	 (I915_PMU_LAST + 1 - __I915_PMU_OTHER(0)))
+
+struct i915_pmu {
+	/**
+	 * @node: List node for CPU hotplug handling.
+	 */
+	struct hlist_node node;
+	/**
+	 * @base: PMU base.
+	 */
+	struct pmu base;
+	/**
+	 * @lock: Lock protecting enable mask and ref count handling.
+	 */
+	spinlock_t lock;
+	/**
+	 * @timer: Timer for internal i915 PMU sampling.
+	 */
+	struct hrtimer timer;
+	/**
+	 * @enable: Bitmask of all currently enabled events.
+	 *
+	 * Bits are derived from uAPI event numbers in a way that low 16 bits
+	 * correspond to engine event _sample_ _type_ (I915_SAMPLE_QUEUED is
+	 * bit 0), and higher bits correspond to other events (for instance
+	 * I915_PMU_ACTUAL_FREQUENCY is bit 16 etc).
+	 *
+	 * In other words, low 16 bits are not per engine but per engine
+	 * sampler type, while the upper bits are directly mapped to other
+	 * event types.
+	 */
+	u64 enable;
+	/**
+	 * @enable_count: Reference counts for the enabled events.
+	 *
+	 * Array indices are mapped in the same way as bits in the @enable field
+	 * and they are used to control sampling on/off when multiple clients
+	 * are using the PMU API.
+	 */
+	unsigned int enable_count[I915_PMU_MASK_BITS];
+	/**
+	 * @sample: Current counter value for i915 events which need sampling.
+	 *
+	 * These counters are updated from the i915 PMU sampling timer.
+	 *
+	 * Only global counters are held here, while the per-engine ones are in
+	 * struct intel_engine_cs.
+	 */
+	u64 sample[__I915_NUM_PMU_SAMPLERS];
+};
+
 struct drm_i915_private {
 	struct drm_device drm;
 
@@ -2246,6 +2311,8 @@ struct drm_i915_private {
 	struct pci_dev *bridge_dev;
 	struct i915_gem_context *kernel_context;
 	struct intel_engine_cs *engine[I915_NUM_ENGINES];
+	struct intel_engine_cs *engine_class[MAX_ENGINE_CLASS + 1]
+					    [MAX_ENGINE_INSTANCE + 1];
 	struct i915_vma *semaphore;
 
 	struct drm_dma_handle *status_page_dmah;
@@ -2708,6 +2775,8 @@ struct drm_i915_private {
 		int	irq;
 	} lpe_audio;
 
+	struct i915_pmu pmu;
+
 	/*
 	 * NOTE: This is the dri1/ums dungeon, don't add stuff here. Your patch
 	 * will be rejected. Instead look for a better place.
@@ -3932,6 +4001,15 @@ extern void i915_perf_fini(struct drm_i915_private *dev_priv);
 extern void i915_perf_register(struct drm_i915_private *dev_priv);
 extern void i915_perf_unregister(struct drm_i915_private *dev_priv);
 
+/* i915_pmu.c */
+#ifdef CONFIG_PERF_EVENTS
+void i915_pmu_register(struct drm_i915_private *i915);
+void i915_pmu_unregister(struct drm_i915_private *i915);
+#else
+static inline void i915_pmu_register(struct drm_i915_private *i915) {}
+static inline void i915_pmu_unregister(struct drm_i915_private *i915) {}
+#endif
+
 /* i915_suspend.c */
 extern int i915_save_state(struct drm_i915_private *dev_priv);
 extern int i915_restore_state(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
new file mode 100644
index 000000000000..cf1406fbc215
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -0,0 +1,707 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/perf_event.h>
+#include <linux/pm_runtime.h>
+
+#include "i915_drv.h"
+#include "intel_ringbuffer.h"
+
+/* Frequency for the sampling timer for events which need it. */
+#define FREQUENCY 200
+#define PERIOD max_t(u64, 10000, NSEC_PER_SEC / FREQUENCY)
+
+#define ENGINE_SAMPLE_MASK \
+	(BIT(I915_SAMPLE_QUEUED) | \
+	 BIT(I915_SAMPLE_BUSY) | \
+	 BIT(I915_SAMPLE_WAIT) | \
+	 BIT(I915_SAMPLE_SEMA))
+
+#define ENGINE_SAMPLE_BITS (1 << I915_PMU_SAMPLE_BITS)
+
+static cpumask_t i915_pmu_cpumask = CPU_MASK_NONE;
+
+static u8 engine_config_sample(u64 config)
+{
+	return config & I915_PMU_SAMPLE_MASK;
+}
+
+static u8 engine_event_sample(struct perf_event *event)
+{
+	return engine_config_sample(event->attr.config);
+}
+
+static u8 engine_event_class(struct perf_event *event)
+{
+	return (event->attr.config >> I915_PMU_CLASS_SHIFT) & 0xff;
+}
+
+static u8 engine_event_instance(struct perf_event *event)
+{
+	return (event->attr.config >> I915_PMU_SAMPLE_BITS) & 0xff;
+}
+
+static bool is_engine_config(u64 config)
+{
+	return config < __I915_PMU_OTHER(0);
+}
+
+static unsigned int config_enabled_bit(u64 config)
+{
+	if (is_engine_config(config))
+		return engine_config_sample(config);
+	else
+		return ENGINE_SAMPLE_BITS + (config - __I915_PMU_OTHER(0));
+}
+
+static u64 config_enabled_mask(u64 config)
+{
+	return BIT_ULL(config_enabled_bit(config));
+}
+
+static bool is_engine_event(struct perf_event *event)
+{
+	return is_engine_config(event->attr.config);
+}
+
+static unsigned int event_enabled_bit(struct perf_event *event)
+{
+	return config_enabled_bit(event->attr.config);
+}
+
+static bool grab_forcewake(struct drm_i915_private *i915, bool fw)
+{
+	if (!fw)
+		intel_uncore_forcewake_get(i915, FORCEWAKE_ALL);
+
+	return true;
+}
+
+static void engines_sample(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	bool fw = false;
+
+	if ((dev_priv->pmu.enable & ENGINE_SAMPLE_MASK) == 0)
+		return;
+
+	if (!dev_priv->gt.awake)
+		return;
+
+	if (!intel_runtime_pm_get_if_in_use(dev_priv))
+		return;
+
+	for_each_engine(engine, dev_priv, id) {
+		u32 current_seqno = intel_engine_get_seqno(engine);
+		u32 last_seqno = intel_engine_last_submit(engine);
+		u32 enable = engine->pmu.enable;
+
+		if (i915_seqno_passed(current_seqno, last_seqno))
+			continue;
+
+		if ((enable & BIT(I915_SAMPLE_QUEUED)) &&
+		    (last_seqno - 1 > current_seqno))
+			engine->pmu.sample[I915_SAMPLE_QUEUED] += PERIOD;
+
+		if (enable & BIT(I915_SAMPLE_BUSY)) {
+			u32 val;
+
+			fw = grab_forcewake(dev_priv, fw);
+			val = I915_READ_FW(RING_MI_MODE(engine->mmio_base));
+			if (!(val & MODE_IDLE))
+				engine->pmu.sample[I915_SAMPLE_BUSY] += PERIOD;
+		}
+
+		if (enable & (BIT(I915_SAMPLE_WAIT) | BIT(I915_SAMPLE_SEMA))) {
+			u32 val;
+
+			fw = grab_forcewake(dev_priv, fw);
+			val = I915_READ_FW(RING_CTL(engine->mmio_base));
+			if ((enable & BIT(I915_SAMPLE_WAIT)) &&
+			    (val & RING_WAIT))
+				engine->pmu.sample[I915_SAMPLE_WAIT] += PERIOD;
+			if ((enable & BIT(I915_SAMPLE_SEMA)) &&
+			    (val & RING_WAIT_SEMAPHORE))
+				engine->pmu.sample[I915_SAMPLE_SEMA] += PERIOD;
+		}
+	}
+
+	if (fw)
+		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
+	intel_runtime_pm_put(dev_priv);
+}
+
+static void frequency_sample(struct drm_i915_private *dev_priv)
+{
+	if (dev_priv->pmu.enable &
+	    config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
+		u64 val;
+
+		val = dev_priv->rps.cur_freq;
+		if (dev_priv->gt.awake &&
+		    intel_runtime_pm_get_if_in_use(dev_priv)) {
+			val = intel_get_cagf(dev_priv,
+					     I915_READ_NOTRACE(GEN6_RPSTAT1));
+			intel_runtime_pm_put(dev_priv);
+		}
+		val = intel_gpu_freq(dev_priv, val);
+		dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT] += val * PERIOD;
+	}
+
+	if (dev_priv->pmu.enable &
+	    config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
+		u64 val = intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq);
+
+		dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ] += val * PERIOD;
+	}
+}
+
+static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
+{
+	struct drm_i915_private *i915 =
+		container_of(hrtimer, struct drm_i915_private, pmu.timer);
+
+	if (i915->pmu.enable == 0)
+		return HRTIMER_NORESTART;
+
+	engines_sample(i915);
+	frequency_sample(i915);
+
+	hrtimer_forward_now(hrtimer, ns_to_ktime(PERIOD));
+	return HRTIMER_RESTART;
+}
+
+static u64 count_interrupts(struct drm_i915_private *i915)
+{
+	/* open-coded kstat_irqs() */
+	struct irq_desc *desc = irq_to_desc(i915->drm.pdev->irq);
+	u64 sum = 0;
+	int cpu;
+
+	if (!desc || !desc->kstat_irqs)
+		return 0;
+
+	for_each_possible_cpu(cpu)
+		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
+
+	return sum;
+}
+
+static void i915_pmu_event_destroy(struct perf_event *event)
+{
+	WARN_ON(event->parent);
+}
+
+static int engine_event_init(struct perf_event *event)
+{
+	struct drm_i915_private *i915 =
+		container_of(event->pmu, typeof(*i915), pmu.base);
+
+	if (!intel_engine_lookup_user(i915, engine_event_class(event),
+				      engine_event_instance(event)))
+		return -ENODEV;
+
+	switch (engine_event_sample(event)) {
+	case I915_SAMPLE_QUEUED:
+	case I915_SAMPLE_BUSY:
+	case I915_SAMPLE_WAIT:
+		break;
+	case I915_SAMPLE_SEMA:
+		if (INTEL_GEN(i915) < 6)
+			return -ENODEV;
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	return 0;
+}
+
+static int i915_pmu_event_init(struct perf_event *event)
+{
+	struct drm_i915_private *i915 =
+		container_of(event->pmu, typeof(*i915), pmu.base);
+	int cpu, ret;
+
+	if (event->attr.type != event->pmu->type)
+		return -ENOENT;
+
+	/* unsupported modes and filters */
+	if (event->attr.sample_period) /* no sampling */
+		return -EINVAL;
+
+	if (has_branch_stack(event))
+		return -EOPNOTSUPP;
+
+	if (event->cpu < 0)
+		return -EINVAL;
+
+	cpu = cpumask_any_and(&i915_pmu_cpumask,
+			      topology_sibling_cpumask(event->cpu));
+	if (cpu >= nr_cpu_ids)
+		return -ENODEV;
+
+	if (is_engine_event(event)) {
+		ret = engine_event_init(event);
+	} else {
+		ret = 0;
+		switch (event->attr.config) {
+		case I915_PMU_INTERRUPTS:
+			break;
+		case I915_PMU_ACTUAL_FREQUENCY:
+			if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))
+				 /* Requires a mutex for sampling! */
+				ret = -ENODEV;
+		case I915_PMU_REQUESTED_FREQUENCY:
+			if (INTEL_GEN(i915) < 6)
+				ret = -ENODEV;
+			break;
+		case I915_PMU_RC6_RESIDENCY:
+			if (!HAS_RC6(i915))
+				ret = -ENODEV;
+			break;
+		case I915_PMU_RC6p_RESIDENCY:
+		case I915_PMU_RC6pp_RESIDENCY:
+			if (!HAS_RC6p(i915))
+				ret = -ENODEV;
+			break;
+		default:
+			ret = -ENOENT;
+			break;
+		}
+	}
+	if (ret)
+		return ret;
+
+	event->cpu = cpu;
+	if (!event->parent)
+		event->destroy = i915_pmu_event_destroy;
+
+	return 0;
+}
+
+static u64 __i915_pmu_event_read(struct perf_event *event)
+{
+	struct drm_i915_private *i915 =
+		container_of(event->pmu, typeof(*i915), pmu.base);
+	u64 val = 0;
+
+	if (is_engine_event(event)) {
+		u8 sample = engine_event_sample(event);
+		struct intel_engine_cs *engine;
+
+		engine = intel_engine_lookup_user(i915,
+						  engine_event_class(event),
+						  engine_event_instance(event));
+
+		if (WARN_ON_ONCE(!engine)) {
+			/* Do nothing */
+		} else {
+			val = engine->pmu.sample[sample];
+		}
+	} else {
+		switch (event->attr.config) {
+		case I915_PMU_ACTUAL_FREQUENCY:
+			val = i915->pmu.sample[__I915_SAMPLE_FREQ_ACT];
+			break;
+		case I915_PMU_REQUESTED_FREQUENCY:
+			val = i915->pmu.sample[__I915_SAMPLE_FREQ_REQ];
+			break;
+		case I915_PMU_INTERRUPTS:
+			val = count_interrupts(i915);
+			break;
+		case I915_PMU_RC6_RESIDENCY:
+			intel_runtime_pm_get(i915);
+			val = intel_rc6_residency_ns(i915,
+						     IS_VALLEYVIEW(i915) ?
+						     VLV_GT_RENDER_RC6 :
+						     GEN6_GT_GFX_RC6);
+			intel_runtime_pm_put(i915);
+			break;
+		case I915_PMU_RC6p_RESIDENCY:
+			intel_runtime_pm_get(i915);
+			val = intel_rc6_residency_ns(i915, GEN6_GT_GFX_RC6p);
+			intel_runtime_pm_put(i915);
+			break;
+		case I915_PMU_RC6pp_RESIDENCY:
+			intel_runtime_pm_get(i915);
+			val = intel_rc6_residency_ns(i915, GEN6_GT_GFX_RC6pp);
+			intel_runtime_pm_put(i915);
+			break;
+		}
+	}
+
+	return val;
+}
+
+static void i915_pmu_event_read(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev, new;
+
+again:
+	prev = local64_read(&hwc->prev_count);
+	new = __i915_pmu_event_read(event);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev, new) != prev)
+		goto again;
+
+	local64_add(new - prev, &event->count);
+}
+
+static void i915_pmu_enable(struct perf_event *event)
+{
+	struct drm_i915_private *i915 =
+		container_of(event->pmu, typeof(*i915), pmu.base);
+	unsigned int bit = event_enabled_bit(event);
+	unsigned long flags;
+
+	spin_lock_irqsave(&i915->pmu.lock, flags);
+
+	/*
+	 * Start the sampling timer when enabling the first event.
+	 */
+	if (i915->pmu.enable == 0)
+		hrtimer_start_range_ns(&i915->pmu.timer,
+				       ns_to_ktime(PERIOD), 0,
+				       HRTIMER_MODE_REL_PINNED);
+
+	/*
+	 * Update the bitmask of enabled events and increment
+	 * the event reference counter.
+	 */
+	GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
+	GEM_BUG_ON(i915->pmu.enable_count[bit] == ~0);
+	i915->pmu.enable |= BIT_ULL(bit);
+	i915->pmu.enable_count[bit]++;
+
+	/*
+	 * For per-engine events the bitmask and reference counting
+	 * is stored per engine.
+	 */
+	if (is_engine_event(event)) {
+		u8 sample = engine_event_sample(event);
+		struct intel_engine_cs *engine;
+
+		engine = intel_engine_lookup_user(i915,
+						  engine_event_class(event),
+						  engine_event_instance(event));
+		GEM_BUG_ON(!engine);
+		engine->pmu.enable |= BIT(sample);
+
+		GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
+		GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
+		engine->pmu.enable_count[sample]++;
+	}
+
+	/*
+	 * Store the current counter value so we can report the correct delta
+	 * for all listeners, even when the event was already enabled and has
+	 * an existing non-zero value.
+	 */
+	local64_set(&event->hw.prev_count, __i915_pmu_event_read(event));
+
+	spin_unlock_irqrestore(&i915->pmu.lock, flags);
+}
+
+static void i915_pmu_disable(struct perf_event *event)
+{
+	struct drm_i915_private *i915 =
+		container_of(event->pmu, typeof(*i915), pmu.base);
+	unsigned int bit = event_enabled_bit(event);
+	unsigned long flags;
+
+	spin_lock_irqsave(&i915->pmu.lock, flags);
+
+	if (is_engine_event(event)) {
+		u8 sample = engine_event_sample(event);
+		struct intel_engine_cs *engine;
+
+		engine = intel_engine_lookup_user(i915,
+						  engine_event_class(event),
+						  engine_event_instance(event));
+		GEM_BUG_ON(!engine);
+		GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
+		GEM_BUG_ON(engine->pmu.enable_count[sample] == 0);
+		/*
+		 * Decrement the reference count and clear the enabled
+		 * bitmask when the last listener on an event goes away.
+		 */
+		if (--engine->pmu.enable_count[sample] == 0)
+			engine->pmu.enable &= ~BIT(sample);
+	}
+
+	GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
+	GEM_BUG_ON(i915->pmu.enable_count[bit] == 0);
+	/*
+	 * Decrement the reference count and clear the enabled
+	 * bitmask when the last listener on an event goes away.
+	 */
+	if (--i915->pmu.enable_count[bit] == 0)
+		i915->pmu.enable &= ~BIT_ULL(bit);
+
+	spin_unlock_irqrestore(&i915->pmu.lock, flags);
+}
+
+static void i915_pmu_event_start(struct perf_event *event, int flags)
+{
+	i915_pmu_enable(event);
+	event->hw.state = 0;
+}
+
+static void i915_pmu_event_stop(struct perf_event *event, int flags)
+{
+	if (flags & PERF_EF_UPDATE)
+		i915_pmu_event_read(event);
+	i915_pmu_disable(event);
+	event->hw.state = PERF_HES_STOPPED;
+}
+
+static int i915_pmu_event_add(struct perf_event *event, int flags)
+{
+	if (flags & PERF_EF_START)
+		i915_pmu_event_start(event, flags);
+
+	return 0;
+}
+
+static void i915_pmu_event_del(struct perf_event *event, int flags)
+{
+	i915_pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int i915_pmu_event_event_idx(struct perf_event *event)
+{
+	return 0;
+}
+
+static ssize_t i915_pmu_format_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *eattr;
+
+	eattr = container_of(attr, struct dev_ext_attribute, attr);
+	return sprintf(buf, "%s\n", (char *)eattr->var);
+}
+
+#define I915_PMU_FORMAT_ATTR(_name, _config) \
+	(&((struct dev_ext_attribute[]) { \
+		{ .attr = __ATTR(_name, 0444, i915_pmu_format_show, NULL), \
+		  .var = (void *)_config, } \
+	})[0].attr.attr)
+
+static struct attribute *i915_pmu_format_attrs[] = {
+	I915_PMU_FORMAT_ATTR(i915_eventid, "config:0-42"),
+	NULL,
+};
+
+static const struct attribute_group i915_pmu_format_attr_group = {
+	.name = "format",
+	.attrs = i915_pmu_format_attrs,
+};
+
+static ssize_t i915_pmu_event_show(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *eattr;
+
+	eattr = container_of(attr, struct dev_ext_attribute, attr);
+	return sprintf(buf, "config=0x%lx\n", (unsigned long)eattr->var);
+}
+
+#define I915_EVENT_ATTR(_name, _config) \
+	(&((struct dev_ext_attribute[]) { \
+		{ .attr = __ATTR(_name, 0444, i915_pmu_event_show, NULL), \
+		  .var = (void *)_config, } \
+	})[0].attr.attr)
+
+#define I915_EVENT_STR(_name, _str) \
+	(&((struct perf_pmu_events_attr[]) { \
+		{ .attr	     = __ATTR(_name, 0444, perf_event_sysfs_show, NULL), \
+		  .id	     = 0, \
+		  .event_str = _str, } \
+	})[0].attr.attr)
+
+#define I915_EVENT(_name, _config, _unit) \
+	I915_EVENT_ATTR(_name, _config), \
+	I915_EVENT_STR(_name.unit, _unit)
+
+#define I915_ENGINE_EVENT(_name, _class, _instance, _sample) \
+	I915_EVENT_ATTR(_name, __I915_PMU_ENGINE(_class, _instance, _sample)), \
+	I915_EVENT_STR(_name.unit, "ns")
+
+#define I915_ENGINE_EVENTS(_name, _class, _instance) \
+	I915_ENGINE_EVENT(_name##_instance-queued, _class, _instance, I915_SAMPLE_QUEUED), \
+	I915_ENGINE_EVENT(_name##_instance-busy, _class, _instance, I915_SAMPLE_BUSY), \
+	I915_ENGINE_EVENT(_name##_instance-sema, _class, _instance, I915_SAMPLE_SEMA), \
+	I915_ENGINE_EVENT(_name##_instance-wait, _class, _instance, I915_SAMPLE_WAIT)
+
+static struct attribute *i915_pmu_events_attrs[] = {
+	I915_ENGINE_EVENTS(rcs, I915_ENGINE_CLASS_RENDER, 0),
+	I915_ENGINE_EVENTS(bcs, I915_ENGINE_CLASS_COPY, 0),
+	I915_ENGINE_EVENTS(vcs, I915_ENGINE_CLASS_VIDEO, 0),
+	I915_ENGINE_EVENTS(vcs, I915_ENGINE_CLASS_VIDEO, 1),
+	I915_ENGINE_EVENTS(vecs, I915_ENGINE_CLASS_VIDEO_ENHANCE, 0),
+
+	I915_EVENT(actual-frequency,    I915_PMU_ACTUAL_FREQUENCY,    "Hz"),
+	I915_EVENT(requested-frequency, I915_PMU_REQUESTED_FREQUENCY, "Hz"),
+
+	I915_EVENT_ATTR(interrupts, I915_PMU_INTERRUPTS),
+
+	I915_EVENT(rc6-residency,   I915_PMU_RC6_RESIDENCY,   "ns"),
+	I915_EVENT(rc6p-residency,  I915_PMU_RC6p_RESIDENCY,  "ns"),
+	I915_EVENT(rc6pp-residency, I915_PMU_RC6pp_RESIDENCY, "ns"),
+
+	NULL,
+};
+
+static const struct attribute_group i915_pmu_events_attr_group = {
+	.name = "events",
+	.attrs = i915_pmu_events_attrs,
+};
+
+static ssize_t
+i915_pmu_get_attr_cpumask(struct device *dev,
+			  struct device_attribute *attr,
+			  char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &i915_pmu_cpumask);
+}
+
+static DEVICE_ATTR(cpumask, 0444, i915_pmu_get_attr_cpumask, NULL);
+
+static struct attribute *i915_cpumask_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group i915_pmu_cpumask_attr_group = {
+	.attrs = i915_cpumask_attrs,
+};
+
+static const struct attribute_group *i915_pmu_attr_groups[] = {
+	&i915_pmu_format_attr_group,
+	&i915_pmu_events_attr_group,
+	&i915_pmu_cpumask_attr_group,
+	NULL
+};
+
+static int i915_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
+{
+	unsigned int target;
+
+	target = cpumask_any(&i915_pmu_cpumask);
+	/* Select the first online CPU as a designated reader. */
+	if (target >= nr_cpu_ids)
+		cpumask_set_cpu(cpu, &i915_pmu_cpumask);
+
+	return 0;
+}
+
+static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
+{
+	struct i915_pmu *pmu = hlist_entry_safe(node, typeof(*pmu), node);
+	unsigned int target;
+
+	if (cpumask_test_and_clear_cpu(cpu, &i915_pmu_cpumask)) {
+		target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
+		/* Migrate events if there is a valid target */
+		if (target < nr_cpu_ids) {
+			cpumask_set_cpu(target, &i915_pmu_cpumask);
+			perf_pmu_migrate_context(&pmu->base, cpu, target);
+		}
+	}
+
+	return 0;
+}
+
+void i915_pmu_register(struct drm_i915_private *i915)
+{
+	int ret;
+
+	if (INTEL_GEN(i915) <= 2) {
+		DRM_INFO("PMU not supported for this GPU.\n");
+		return;
+	}
+
+	ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
+				      "perf/x86/intel/i915:online",
+				      i915_pmu_cpu_online,
+				      i915_pmu_cpu_offline);
+	if (ret)
+		goto err_hp_state;
+
+	ret = cpuhp_state_add_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
+				       &i915->pmu.node);
+	if (ret)
+		goto err_hp_instance;
+
+	i915->pmu.base.attr_groups	= i915_pmu_attr_groups;
+	i915->pmu.base.task_ctx_nr	= perf_invalid_context;
+	i915->pmu.base.event_init	= i915_pmu_event_init;
+	i915->pmu.base.add		= i915_pmu_event_add;
+	i915->pmu.base.del		= i915_pmu_event_del;
+	i915->pmu.base.start		= i915_pmu_event_start;
+	i915->pmu.base.stop		= i915_pmu_event_stop;
+	i915->pmu.base.read		= i915_pmu_event_read;
+	i915->pmu.base.event_idx	= i915_pmu_event_event_idx;
+
+	spin_lock_init(&i915->pmu.lock);
+	hrtimer_init(&i915->pmu.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	i915->pmu.timer.function = i915_sample;
+	i915->pmu.enable = 0;
+
+	ret = perf_pmu_register(&i915->pmu.base, "i915", -1);
+	if (ret == 0)
+		return;
+
+	i915->pmu.base.event_init = NULL;
+
+	WARN_ON(cpuhp_state_remove_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
+					    &i915->pmu.node));
+
+err_hp_instance:
+	cpuhp_remove_multi_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
+
+err_hp_state:
+	DRM_NOTE("Failed to register PMU! (err=%d)\n", ret);
+}
+
+void i915_pmu_unregister(struct drm_i915_private *i915)
+{
+	if (!i915->pmu.base.event_init)
+		return;
+
+	WARN_ON(cpuhp_state_remove_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
+					    &i915->pmu.node));
+	cpuhp_remove_multi_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
+
+	i915->pmu.enable = 0;
+
+	hrtimer_cancel(&i915->pmu.timer);
+
+	perf_pmu_unregister(&i915->pmu.base);
+	i915->pmu.base.event_init = NULL;
+}
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 82f36dd0cd94..16e36b2eef87 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -186,6 +186,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define VIDEO_ENHANCEMENT_CLASS	2
 #define COPY_ENGINE_CLASS	3
 #define OTHER_CLASS		4
+#define MAX_ENGINE_CLASS	4
+
+#define MAX_ENGINE_INSTANCE    1
 
 /* PCI config space */
 
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index a28e2a864cf1..35c117c3fa0d 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -200,6 +200,15 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
 	GEM_BUG_ON(info->class >= ARRAY_SIZE(intel_engine_classes));
 	class_info = &intel_engine_classes[info->class];
 
+	if (GEM_WARN_ON(info->class > MAX_ENGINE_CLASS))
+		return -EINVAL;
+
+	if (GEM_WARN_ON(info->instance > MAX_ENGINE_INSTANCE))
+		return -EINVAL;
+
+	if (GEM_WARN_ON(dev_priv->engine_class[info->class][info->instance]))
+		return -EINVAL;
+
 	GEM_BUG_ON(dev_priv->engine[id]);
 	engine = kzalloc(sizeof(*engine), GFP_KERNEL);
 	if (!engine)
@@ -227,6 +236,7 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
 
 	ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
 
+	dev_priv->engine_class[info->class][info->instance] = engine;
 	dev_priv->engine[id] = engine;
 	return 0;
 }
@@ -1578,6 +1588,31 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine)
 	}
 }
 
+static u8 user_class_map[I915_ENGINE_CLASS_MAX] = {
+	[I915_ENGINE_CLASS_OTHER] = OTHER_CLASS,
+	[I915_ENGINE_CLASS_RENDER] = RENDER_CLASS,
+	[I915_ENGINE_CLASS_COPY] = COPY_ENGINE_CLASS,
+	[I915_ENGINE_CLASS_VIDEO] = VIDEO_DECODE_CLASS,
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = VIDEO_ENHANCEMENT_CLASS,
+};
+
+struct intel_engine_cs *
+intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
+{
+	if (class >= ARRAY_SIZE(user_class_map))
+		return NULL;
+
+	class = user_class_map[class];
+
+	if (WARN_ON_ONCE(class > MAX_ENGINE_CLASS))
+		return NULL;
+
+	if (instance > MAX_ENGINE_INSTANCE)
+		return NULL;
+
+	return i915->engine_class[class][instance];
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_engine.c"
 #endif
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 56d7ae9f298b..7a901e766d03 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -330,6 +330,28 @@ struct intel_engine_cs {
 		I915_SELFTEST_DECLARE(bool mock : 1);
 	} breadcrumbs;
 
+	struct {
+		/**
+		 * @enable: Bitmask of enabled sample events on this engine.
+		 *
+		 * Bits correspond to sample event types, for instance
+		 * I915_SAMPLE_QUEUED is bit 0 etc.
+		 */
+		u32 enable;
+		/**
+		 * @enable_count: Reference count for the enabled samplers.
+		 *
+		 * Index number corresponds to the bit number from @enable.
+		 */
+		unsigned int enable_count[I915_PMU_SAMPLE_BITS];
+		/**
+		 * @sample: Counter value for sampling events.
+		 *
+		 * Our internal timer stores the current counter in this field.
+		 */
+		u64 sample[I915_ENGINE_SAMPLE_MAX];
+	} pmu;
+
 	/*
 	 * A pool of objects to use as shadow copies of client batch buffers
 	 * when the command parser is enabled. Prevents the client from
@@ -834,4 +856,7 @@ void intel_engines_reset_default_submission(struct drm_i915_private *i915);
 
 bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
 
+struct intel_engine_cs *
+intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
+
 #endif /* _INTEL_RINGBUFFER_H_ */
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index fe25a01c81f2..682abafecc8c 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -86,6 +86,63 @@ enum i915_mocs_table_index {
 	I915_MOCS_CACHED,
 };
 
+enum drm_i915_gem_engine_class {
+	I915_ENGINE_CLASS_OTHER = 0,
+	I915_ENGINE_CLASS_RENDER = 1,
+	I915_ENGINE_CLASS_COPY = 2,
+	I915_ENGINE_CLASS_VIDEO = 3,
+	I915_ENGINE_CLASS_VIDEO_ENHANCE = 4,
+	I915_ENGINE_CLASS_MAX /* non-ABI */
+};
+
+/**
+ * DOC: perf_events exposed by i915 through /sys/bus/event_sources/drivers/i915
+ *
+ */
+
+enum drm_i915_pmu_engine_sample {
+	I915_SAMPLE_QUEUED = 0,
+	I915_SAMPLE_BUSY = 1,
+	I915_SAMPLE_WAIT = 2,
+	I915_SAMPLE_SEMA = 3,
+	I915_ENGINE_SAMPLE_MAX /* non-ABI */
+};
+
+#define I915_PMU_SAMPLE_BITS (4)
+#define I915_PMU_SAMPLE_MASK (0xf)
+#define I915_PMU_SAMPLE_INSTANCE_BITS (8)
+#define I915_PMU_CLASS_SHIFT \
+	(I915_PMU_SAMPLE_BITS + I915_PMU_SAMPLE_INSTANCE_BITS)
+
+#define __I915_PMU_ENGINE(class, instance, sample) \
+	((class) << I915_PMU_CLASS_SHIFT | \
+	(instance) << I915_PMU_SAMPLE_BITS | \
+	(sample))
+
+#define I915_PMU_ENGINE_QUEUED(class, instance) \
+	__I915_PMU_ENGINE(class, instance, I915_SAMPLE_QUEUED)
+
+#define I915_PMU_ENGINE_BUSY(class, instance) \
+	__I915_PMU_ENGINE(class, instance, I915_SAMPLE_BUSY)
+
+#define I915_PMU_ENGINE_WAIT(class, instance) \
+	__I915_PMU_ENGINE(class, instance, I915_SAMPLE_WAIT)
+
+#define I915_PMU_ENGINE_SEMA(class, instance) \
+	__I915_PMU_ENGINE(class, instance, I915_SAMPLE_SEMA)
+
+#define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x))
+
+#define I915_PMU_ACTUAL_FREQUENCY	__I915_PMU_OTHER(0)
+#define I915_PMU_REQUESTED_FREQUENCY	__I915_PMU_OTHER(1)
+#define I915_PMU_INTERRUPTS		__I915_PMU_OTHER(2)
+
+#define I915_PMU_RC6_RESIDENCY		__I915_PMU_OTHER(3)
+#define I915_PMU_RC6p_RESIDENCY		__I915_PMU_OTHER(4)
+#define I915_PMU_RC6pp_RESIDENCY	__I915_PMU_OTHER(5)
+
+#define I915_PMU_LAST I915_PMU_RC6pp_RESIDENCY
+
 /* Each region is a minimum of 16k, and there are at most 255 of them.
  */
 #define I915_NR_TEX_REGIONS 255	/* table size 2k - maximum due to use
-- 
2.9.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 4/8] drm/i915/pmu: Suspend sampling when GPU is idle
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (2 preceding siblings ...)
  2017-09-25 15:15 ` [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:39   ` Chris Wilson
  2017-09-25 15:15 ` [PATCH 5/8] drm/i915: Wrap context schedule notification Tvrtko Ursulin
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

If only a subset of events is enabled, we can afford to suspend
the sampling timer when the GPU is idle and so save some cycles
and power.

v2: Rebase and limit timer even more.
v3: Rebase.
v4: Rebase.
v5: Skip action if perf PMU failed to register.
v6: Checkpatch cleanup.
v7:
 * Add a common helper to start the timer if needed. (Chris Wilson)
 * Add comment explaining bitwise logic in pmu_needs_timer.
v8: Fix some comments styles. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  8 +++
 drivers/gpu/drm/i915/i915_gem.c         |  1 +
 drivers/gpu/drm/i915/i915_gem_request.c |  1 +
 drivers/gpu/drm/i915/i915_pmu.c         | 88 +++++++++++++++++++++++++++++----
 4 files changed, 88 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a40c314eed5f..11a624890ad6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2253,6 +2253,10 @@ struct i915_pmu {
 	 */
 	unsigned int enable_count[I915_PMU_MASK_BITS];
 	/**
+	 * @timer_enabled: Should the internal sampling timer be running.
+	 */
+	bool timer_enabled;
+	/**
 	 * @sample: Current counter value for i915 events which need sampling.
 	 *
 	 * These counters are updated from the i915 PMU sampling timer.
@@ -4005,9 +4009,13 @@ extern void i915_perf_unregister(struct drm_i915_private *dev_priv);
 #ifdef CONFIG_PERF_EVENTS
 void i915_pmu_register(struct drm_i915_private *i915);
 void i915_pmu_unregister(struct drm_i915_private *i915);
+void i915_pmu_gt_idle(struct drm_i915_private *i915);
+void i915_pmu_gt_active(struct drm_i915_private *i915);
 #else
 static inline void i915_pmu_register(struct drm_i915_private *i915) {}
 static inline void i915_pmu_unregister(struct drm_i915_private *i915) {}
+static inline void i915_pmu_gt_idle(struct drm_i915_private *i915) {}
+static inline void i915_pmu_gt_active(struct drm_i915_private *i915) {}
 #endif
 
 /* i915_suspend.c */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 73eeb6b1f1cd..18f9f0b541b8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3191,6 +3191,7 @@ i915_gem_idle_work_handler(struct work_struct *work)
 
 	intel_engines_mark_idle(dev_priv);
 	i915_gem_timelines_mark_idle(dev_priv);
+	i915_pmu_gt_idle(dev_priv);
 
 	GEM_BUG_ON(!dev_priv->gt.awake);
 	dev_priv->gt.awake = false;
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 4eb1a76731b2..43249be1f97c 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -258,6 +258,7 @@ static void mark_busy(struct drm_i915_private *i915)
 	i915_update_gfx_val(i915);
 	if (INTEL_GEN(i915) >= 6)
 		gen6_rps_busy(i915);
+	i915_pmu_gt_active(i915);
 
 	queue_delayed_work(i915->wq,
 			   &i915->gt.retire_work,
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index cf1406fbc215..13c1f5887667 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -90,6 +90,75 @@ static unsigned int event_enabled_bit(struct perf_event *event)
 	return config_enabled_bit(event->attr.config);
 }
 
+static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
+{
+	u64 enable;
+
+	/*
+	 * Only some counters need the sampling timer.
+	 *
+	 * We start with a bitmask of all currently enabled events.
+	 */
+	enable = i915->pmu.enable;
+
+	/*
+	 * Mask out all the ones which do not need the timer, or in
+	 * other words keep all the ones that could need the timer.
+	 */
+	enable &= config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY) |
+		  config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY) |
+		  ENGINE_SAMPLE_MASK;
+
+	/*
+	 * When the GPU is idle per-engine counters do not need to be
+	 * running so clear those bits out.
+	 */
+	if (!gpu_active)
+		enable &= ~ENGINE_SAMPLE_MASK;
+
+	/*
+	 * If some bits remain it means we need the sampling timer running.
+	 */
+	return enable;
+}
+
+void i915_pmu_gt_idle(struct drm_i915_private *i915)
+{
+	if (!i915->pmu.base.event_init)
+		return;
+
+	spin_lock_irq(&i915->pmu.lock);
+	/*
+	 * Signal sampling timer to stop if only engine events are enabled and
+	 * the GPU has gone idle.
+	 */
+	i915->pmu.timer_enabled = pmu_needs_timer(i915, false);
+	spin_unlock_irq(&i915->pmu.lock);
+}
+
+static void __i915_pmu_maybe_start_timer(struct drm_i915_private *i915)
+{
+	if (!i915->pmu.timer_enabled && pmu_needs_timer(i915, true)) {
+		i915->pmu.timer_enabled = true;
+		hrtimer_start_range_ns(&i915->pmu.timer,
+				       ns_to_ktime(PERIOD), 0,
+				       HRTIMER_MODE_REL_PINNED);
+	}
+}
+
+void i915_pmu_gt_active(struct drm_i915_private *i915)
+{
+	if (!i915->pmu.base.event_init)
+		return;
+
+	spin_lock_irq(&i915->pmu.lock);
+	/*
+	 * Re-enable sampling timer when GPU goes active.
+	 */
+	__i915_pmu_maybe_start_timer(i915);
+	spin_unlock_irq(&i915->pmu.lock);
+}
+
 static bool grab_forcewake(struct drm_i915_private *i915, bool fw)
 {
 	if (!fw)
@@ -183,7 +252,7 @@ static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
 	struct drm_i915_private *i915 =
 		container_of(hrtimer, struct drm_i915_private, pmu.timer);
 
-	if (i915->pmu.enable == 0)
+	if (!READ_ONCE(i915->pmu.timer_enabled))
 		return HRTIMER_NORESTART;
 
 	engines_sample(i915);
@@ -381,14 +450,6 @@ static void i915_pmu_enable(struct perf_event *event)
 	spin_lock_irqsave(&i915->pmu.lock, flags);
 
 	/*
-	 * Start the sampling timer when enabling the first event.
-	 */
-	if (i915->pmu.enable == 0)
-		hrtimer_start_range_ns(&i915->pmu.timer,
-				       ns_to_ktime(PERIOD), 0,
-				       HRTIMER_MODE_REL_PINNED);
-
-	/*
 	 * Update the bitmask of enabled events and increment
 	 * the event reference counter.
 	 */
@@ -398,6 +459,11 @@ static void i915_pmu_enable(struct perf_event *event)
 	i915->pmu.enable_count[bit]++;
 
 	/*
+	 * Start the sampling timer if needed and not already enabled.
+	 */
+	__i915_pmu_maybe_start_timer(i915);
+
+	/*
 	 * For per-engine events the bitmask and reference counting
 	 * is stored per engine.
 	 */
@@ -459,8 +525,10 @@ static void i915_pmu_disable(struct perf_event *event)
 	 * Decrement the reference count and clear the enabled
 	 * bitmask when the last listener on an event goes away.
 	 */
-	if (--i915->pmu.enable_count[bit] == 0)
+	if (--i915->pmu.enable_count[bit] == 0) {
 		i915->pmu.enable &= ~BIT_ULL(bit);
+		i915->pmu.timer_enabled &= pmu_needs_timer(i915, true);
+	}
 
 	spin_unlock_irqrestore(&i915->pmu.lock, flags);
 }
-- 
2.9.5


* [PATCH 5/8] drm/i915: Wrap context schedule notification
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (3 preceding siblings ...)
  2017-09-25 15:15 ` [PATCH 4/8] drm/i915/pmu: Suspend sampling when GPU is idle Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:40   ` Chris Wilson
  2017-09-25 15:15 ` [PATCH 6/8] drm/i915: Engine busy time tracking Tvrtko Ursulin
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

No functional change, just something which will be handy in the
following patch.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3623403a4f2d..7cd14b701ed7 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -363,6 +363,18 @@ execlists_context_status_change(struct drm_i915_gem_request *rq,
 				   status, rq);
 }
 
+static inline void
+execlists_context_schedule_in(struct drm_i915_gem_request *rq)
+{
+	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
+}
+
+static inline void
+execlists_context_schedule_out(struct drm_i915_gem_request *rq)
+{
+	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
+}
+
 static void
 execlists_update_context_pdps(struct i915_hw_ppgtt *ppgtt, u32 *reg_state)
 {
@@ -408,7 +420,7 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 		if (rq) {
 			GEM_BUG_ON(count > !n);
 			if (!count++)
-				execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
+				execlists_context_schedule_in(rq);
 			port_set(&port[n], port_pack(rq, count));
 			desc = execlists_update_context(rq);
 			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
@@ -746,8 +758,7 @@ static void intel_lrc_irq_handler(unsigned long data)
 			if (--count == 0) {
 				GEM_BUG_ON(status & GEN8_CTX_STATUS_PREEMPTED);
 				GEM_BUG_ON(!i915_gem_request_completed(rq));
-				execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
-
+				execlists_context_schedule_out(rq);
 				trace_i915_gem_request_out(rq);
 				i915_gem_request_put(rq);
 
-- 
2.9.5


* [PATCH 6/8] drm/i915: Engine busy time tracking
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (4 preceding siblings ...)
  2017-09-25 15:15 ` [PATCH 5/8] drm/i915: Wrap context schedule notification Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:43   ` Chris Wilson
  2017-09-25 15:15 ` [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU Tvrtko Ursulin
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Track total time requests have been executing on the hardware.

We add a new kernel API to allow software tracking of the time
GPU engines spend executing requests.

Both per-engine and global APIs are added, with the latter also
being exported for use by external users.

v2:
 * Squashed with the internal API.
 * Dropped static key.
 * Made per-engine.
 * Store time in monotonic ktime.

v3: Moved stats clearing to disable.

v4:
 * Comments.
 * Don't export the API just yet.

v5: Whitespace cleanup.

v6:
 * Rename ref to active.
 * Drop engine aggregate stats for now.
 * Account initial busy period after enabling stats.

v7:
 * Rebase.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/intel_engine_cs.c  | 84 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.c        |  2 +
 drivers/gpu/drm/i915/intel_ringbuffer.h | 92 +++++++++++++++++++++++++++++++++
 3 files changed, 178 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 35c117c3fa0d..8db83f504d70 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -234,6 +234,8 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
 	/* Nothing to do here, execute in order of dependencies */
 	engine->schedule = NULL;
 
+	spin_lock_init(&engine->stats.lock);
+
 	ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
 
 	dev_priv->engine_class[info->class][info->instance] = engine;
@@ -1613,6 +1615,88 @@ intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
 	return i915->engine_class[class][instance];
 }
 
+/**
+ * intel_enable_engine_stats() - Enable engine busy tracking on engine
+ * @engine: engine to enable stats collection
+ *
+ * Start collecting the engine busyness data for @engine.
+ *
+ * Returns 0 on success or a negative error code.
+ */
+int intel_enable_engine_stats(struct intel_engine_cs *engine)
+{
+	unsigned long flags;
+
+	if (!i915_modparams.enable_execlists)
+		return -ENODEV;
+
+	spin_lock_irqsave(&engine->stats.lock, flags);
+	if (engine->stats.enabled == ~0)
+		goto busy;
+	if (engine->stats.enabled++ == 0)
+		engine->stats.enabled_at = ktime_get();
+	spin_unlock_irqrestore(&engine->stats.lock, flags);
+
+	return 0;
+
+busy:
+	spin_unlock_irqrestore(&engine->stats.lock, flags);
+
+	return -EBUSY;
+}
+
+/**
+ * intel_disable_engine_stats() - Disable engine busy tracking on engine
+ * @engine: engine to disable stats collection
+ *
+ * Stops collecting the engine busyness data for @engine.
+ */
+void intel_disable_engine_stats(struct intel_engine_cs *engine)
+{
+	unsigned long flags;
+
+	if (!i915_modparams.enable_execlists)
+		return;
+
+	spin_lock_irqsave(&engine->stats.lock, flags);
+	WARN_ON_ONCE(engine->stats.enabled == 0);
+	if (--engine->stats.enabled == 0) {
+		engine->stats.enabled_at = 0;
+		engine->stats.active = 0;
+		engine->stats.start = 0;
+		engine->stats.total = 0;
+	}
+	spin_unlock_irqrestore(&engine->stats.lock, flags);
+}
+
+/**
+ * intel_engine_get_busy_time() - Return current accumulated engine busyness
+ * @engine: engine to report on
+ *
+ * Returns accumulated time @engine was busy since engine stats were enabled.
+ */
+ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine)
+{
+	ktime_t total;
+	unsigned long flags;
+
+	spin_lock_irqsave(&engine->stats.lock, flags);
+
+	total = engine->stats.total;
+
+	/*
+	 * If the engine is executing something at the moment
+	 * add it to the total.
+	 */
+	if (engine->stats.active)
+		total = ktime_add(total,
+				  ktime_sub(ktime_get(), engine->stats.start));
+
+	spin_unlock_irqrestore(&engine->stats.lock, flags);
+
+	return total;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_engine.c"
 #endif
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7cd14b701ed7..1e743d3d16cb 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -366,12 +366,14 @@ execlists_context_status_change(struct drm_i915_gem_request *rq,
 static inline void
 execlists_context_schedule_in(struct drm_i915_gem_request *rq)
 {
+	intel_engine_context_in(rq->engine);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
 }
 
 static inline void
 execlists_context_schedule_out(struct drm_i915_gem_request *rq)
 {
+	intel_engine_context_out(rq->engine);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 7a901e766d03..8db228ebdb28 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -538,6 +538,38 @@ struct intel_engine_cs {
 	 * certain bits to encode the command length in the header).
 	 */
 	u32 (*get_cmd_length_mask)(u32 cmd_header);
+
+	struct {
+		/**
+		 * @lock: Lock protecting the below fields.
+		 */
+		spinlock_t lock;
+		/**
+		 * @enabled: Reference count indicating number of listeners.
+		 */
+		unsigned int enabled;
+		/**
+		 * @active: Number of contexts currently scheduled in.
+		 */
+		unsigned int active;
+		/**
+		 * @enabled_at: Timestamp when busy stats were enabled.
+		 */
+		ktime_t enabled_at;
+		/**
+		 * @start: Timestamp of the last idle to active transition.
+		 *
+		 * Idle is defined as active == 0 and busy as active > 0.
+		 */
+		ktime_t start;
+		/**
+		 * @total: Total time this engine was busy.
+		 *
+		 * Accumulated time not counting the most recent block in cases
+		 * where the engine is currently busy (active > 0).
+		 */
+		ktime_t total;
+	} stats;
 };
 
 static inline unsigned int
@@ -859,4 +891,64 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
 struct intel_engine_cs *
 intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
 
+static inline void intel_engine_context_in(struct intel_engine_cs *engine)
+{
+	unsigned long flags;
+
+	if (READ_ONCE(engine->stats.enabled) == 0)
+		return;
+
+	spin_lock_irqsave(&engine->stats.lock, flags);
+
+	if (engine->stats.enabled > 0) {
+		if (engine->stats.active++ == 0)
+			engine->stats.start = ktime_get();
+		GEM_BUG_ON(engine->stats.active == 0);
+	}
+
+	spin_unlock_irqrestore(&engine->stats.lock, flags);
+}
+
+static inline void intel_engine_context_out(struct intel_engine_cs *engine)
+{
+	unsigned long flags;
+
+	if (READ_ONCE(engine->stats.enabled) == 0)
+		return;
+
+	spin_lock_irqsave(&engine->stats.lock, flags);
+
+	if (engine->stats.enabled > 0) {
+		ktime_t last, now = ktime_get();
+
+		if (engine->stats.active && --engine->stats.active == 0) {
+			/*
+			 * Decrement the active context count and in case GPU
+			 * is now idle add up to the running total.
+			 */
+			last = ktime_sub(now, engine->stats.start);
+
+			engine->stats.total = ktime_add(engine->stats.total,
+							last);
+		} else if (engine->stats.active == 0) {
+			/*
+			 * After turning on engine stats, context out might be
+			 * the first event in which case we account from the
+			 * time stats gathering was turned on.
+			 */
+			last = ktime_sub(now, engine->stats.enabled_at);
+
+			engine->stats.total = ktime_add(engine->stats.total,
+							last);
+		}
+	}
+
+	spin_unlock_irqrestore(&engine->stats.lock, flags);
+}
+
+int intel_enable_engine_stats(struct intel_engine_cs *engine);
+void intel_disable_engine_stats(struct intel_engine_cs *engine);
+
+ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine);
+
 #endif /* _INTEL_RINGBUFFER_H_ */
-- 
2.9.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (5 preceding siblings ...)
  2017-09-25 15:15 ` [PATCH 6/8] drm/i915: Engine busy time tracking Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:48   ` Chris Wilson
  2017-09-25 15:15 ` [PATCH 8/8] drm/i915: Gate engine stats collection with a static key Tvrtko Ursulin
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

We can use engine busy stats instead of the MMIO sampling timer
for better efficiency.

At a minimum this saves num_engines MMIO reads per sampling period,
and in the best case, when only engine busy samplers are active,
it allows us to not run the sampling timer at all.

v2: Rebase.
v3:
 * Rebase, comments.
 * Leave engine busyness controls out of workers.
v4: Checkpatch cleanup.
v5: Added comment to pmu_needs_timer change.
v6:
 * Rebase.
 * Fix style of some comments. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_pmu.c         | 40 ++++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/intel_ringbuffer.h |  5 +++++
 2 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 13c1f5887667..228aa50ce709 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -90,6 +90,11 @@ static unsigned int event_enabled_bit(struct perf_event *event)
 	return config_enabled_bit(event->attr.config);
 }
 
+static bool supports_busy_stats(void)
+{
+	return i915_modparams.enable_execlists;
+}
+
 static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
 {
 	u64 enable;
@@ -115,6 +120,12 @@ static bool pmu_needs_timer(struct drm_i915_private *i915, bool gpu_active)
 	 */
 	if (!gpu_active)
 		enable &= ~ENGINE_SAMPLE_MASK;
+	/*
+	 * Also if software busyness tracking is available we do not
+	 * need the timer for the I915_SAMPLE_BUSY counter.
+	 */
+	else if (supports_busy_stats())
+		enable &= ~BIT(I915_SAMPLE_BUSY);
 
 	/*
 	 * If some bits remain it means we need the sampling timer running.
@@ -194,7 +205,8 @@ static void engines_sample(struct drm_i915_private *dev_priv)
 		    (last_seqno - 1 > current_seqno))
 			engine->pmu.sample[I915_SAMPLE_QUEUED] += PERIOD;
 
-		if (enable & BIT(I915_SAMPLE_BUSY)) {
+		if ((enable & BIT(I915_SAMPLE_BUSY)) &&
+		    !engine->pmu.busy_stats) {
 			u32 val;
 
 			fw = grab_forcewake(dev_priv, fw);
@@ -387,6 +399,9 @@ static u64 __i915_pmu_event_read(struct perf_event *event)
 
 		if (WARN_ON_ONCE(!engine)) {
 			/* Do nothing */
+		} else if (sample == I915_SAMPLE_BUSY &&
+			   engine->pmu.busy_stats) {
+			val = ktime_to_ns(intel_engine_get_busy_time(engine));
 		} else {
 			val = engine->pmu.sample[sample];
 		}
@@ -440,6 +455,12 @@ static void i915_pmu_event_read(struct perf_event *event)
 	local64_add(new - prev, &event->count);
 }
 
+static bool engine_needs_busy_stats(struct intel_engine_cs *engine)
+{
+	return supports_busy_stats() &&
+	       (engine->pmu.enable & BIT(I915_SAMPLE_BUSY));
+}
+
 static void i915_pmu_enable(struct perf_event *event)
 {
 	struct drm_i915_private *i915 =
@@ -479,7 +500,14 @@ static void i915_pmu_enable(struct perf_event *event)
 
 		GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
 		GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
-		engine->pmu.enable_count[sample]++;
+		if (engine->pmu.enable_count[sample]++ == 0) {
+			if (engine_needs_busy_stats(engine) &&
+			    !engine->pmu.busy_stats) {
+				engine->pmu.busy_stats =
+					intel_enable_engine_stats(engine) == 0;
+				WARN_ON_ONCE(!engine->pmu.busy_stats);
+			}
+		}
 	}
 
 	/*
@@ -515,8 +543,14 @@ static void i915_pmu_disable(struct perf_event *event)
 		 * Decrement the reference count and clear the enabled
 		 * bitmask when the last listener on an event goes away.
 		 */
-		if (--engine->pmu.enable_count[sample] == 0)
+		if (--engine->pmu.enable_count[sample] == 0) {
 			engine->pmu.enable &= ~BIT(sample);
+			if (!engine_needs_busy_stats(engine) &&
+			    engine->pmu.busy_stats) {
+				engine->pmu.busy_stats = false;
+				intel_disable_engine_stats(engine);
+			}
+		}
 	}
 
 	GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 8db228ebdb28..ef638b97e46a 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -350,6 +350,11 @@ struct intel_engine_cs {
 		 * Our internal timer stores the current counter in this field.
 		 */
 		u64 sample[I915_ENGINE_SAMPLE_MAX];
+		/**
+		 * @busy_stats: Has enablement of engine stats tracking been
+		 * 		requested.
+		 */
+		bool busy_stats;
 	} pmu;
 
 	/*
-- 
2.9.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 8/8] drm/i915: Gate engine stats collection with a static key
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (6 preceding siblings ...)
  2017-09-25 15:15 ` [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU Tvrtko Ursulin
@ 2017-09-25 15:15 ` Tvrtko Ursulin
  2017-09-25 17:56   ` Chris Wilson
  2017-09-25 15:46 ` ✓ Fi.CI.BAT: success for i915 PMU and engine busy stats (rev12) Patchwork
  2017-09-25 21:22 ` ✗ Fi.CI.IGT: failure " Patchwork
  9 siblings, 1 reply; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-25 15:15 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

This reduces the cost of the software engine busyness tracking
to a single no-op instruction when there are no listeners.

v2: Rebase and some comments.
v3: Rebase.
v4: Checkpatch fixes.
v5: Rebase.
v6: Use system_long_wq to avoid being blocked by struct_mutex
    users.
v7: Fix bad conflict resolution from last rebase. (Dmitry Rogozhkin)
v8: Rebase.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_pmu.c         |  54 +++++++++++++++--
 drivers/gpu/drm/i915/intel_engine_cs.c  |  17 ++++++
 drivers/gpu/drm/i915/intel_ringbuffer.h | 101 ++++++++++++++++++++------------
 3 files changed, 130 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 228aa50ce709..e768f33ebb3d 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -501,11 +501,17 @@ static void i915_pmu_enable(struct perf_event *event)
 		GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
 		GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
 		if (engine->pmu.enable_count[sample]++ == 0) {
+			/*
+			 * Enable engine busy stats tracking if needed or
+			 * alternatively cancel the scheduled disabling of the
+			 * same.
+			 */
 			if (engine_needs_busy_stats(engine) &&
 			    !engine->pmu.busy_stats) {
-				engine->pmu.busy_stats =
-					intel_enable_engine_stats(engine) == 0;
-				WARN_ON_ONCE(!engine->pmu.busy_stats);
+				engine->pmu.busy_stats = true;
+				if (!cancel_delayed_work(&engine->pmu.disable_busy_stats))
+					queue_work(system_long_wq,
+						   &engine->pmu.enable_busy_stats);
 			}
 		}
 	}
@@ -548,7 +554,15 @@ static void i915_pmu_disable(struct perf_event *event)
 			if (!engine_needs_busy_stats(engine) &&
 			    engine->pmu.busy_stats) {
 				engine->pmu.busy_stats = false;
-				intel_disable_engine_stats(engine);
+				/*
+				 * Request a delayed disable so that the rapid
+				 * on/off event cycles which can happen when
+				 * tools like perf stat start up are handled
+				 * gracefully.
+				 */
+				queue_delayed_work(system_long_wq,
+						   &engine->pmu.disable_busy_stats,
+						   round_jiffies_up_relative(HZ));
 			}
 		}
 	}
@@ -739,9 +753,27 @@ static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
 	return 0;
 }
 
+static void __enable_busy_stats(struct work_struct *work)
+{
+	struct intel_engine_cs *engine =
+		container_of(work, typeof(*engine), pmu.enable_busy_stats);
+
+	WARN_ON_ONCE(intel_enable_engine_stats(engine));
+}
+
+static void __disable_busy_stats(struct work_struct *work)
+{
+	struct intel_engine_cs *engine =
+	       container_of(work, typeof(*engine), pmu.disable_busy_stats.work);
+
+	intel_disable_engine_stats(engine);
+}
+
 void i915_pmu_register(struct drm_i915_private *i915)
 {
 	int ret;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
 
 	if (INTEL_GEN(i915) <= 2) {
 		DRM_INFO("PMU not supported for this GPU.");
@@ -775,6 +807,12 @@ void i915_pmu_register(struct drm_i915_private *i915)
 	i915->pmu.timer.function = i915_sample;
 	i915->pmu.enable = 0;
 
+	for_each_engine(engine, i915, id) {
+		INIT_WORK(&engine->pmu.enable_busy_stats, __enable_busy_stats);
+		INIT_DELAYED_WORK(&engine->pmu.disable_busy_stats,
+				  __disable_busy_stats);
+	}
+
 	ret = perf_pmu_register(&i915->pmu.base, "i915", -1);
 	if (ret == 0)
 		return;
@@ -793,6 +831,9 @@ void i915_pmu_register(struct drm_i915_private *i915)
 
 void i915_pmu_unregister(struct drm_i915_private *i915)
 {
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
 	if (!i915->pmu.base.event_init)
 		return;
 
@@ -804,6 +845,11 @@ void i915_pmu_unregister(struct drm_i915_private *i915)
 
 	hrtimer_cancel(&i915->pmu.timer);
 
+	for_each_engine(engine, i915, id) {
+		flush_work(&engine->pmu.enable_busy_stats);
+		flush_delayed_work(&engine->pmu.disable_busy_stats);
+	}
+
 	perf_pmu_unregister(&i915->pmu.base);
 	i915->pmu.base.event_init = NULL;
 }
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 8db83f504d70..eaf1d31dbf31 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -21,6 +21,7 @@
  * IN THE SOFTWARE.
  *
  */
+#include <linux/static_key.h>
 
 #include "i915_drv.h"
 #include "intel_ringbuffer.h"
@@ -1615,6 +1616,10 @@ intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
 	return i915->engine_class[class][instance];
 }
 
+DEFINE_STATIC_KEY_FALSE(i915_engine_stats_key);
+static DEFINE_MUTEX(i915_engine_stats_mutex);
+static int i915_engine_stats_ref;
+
 /**
  * intel_enable_engine_stats() - Enable engine busy tracking on engine
  * @engine: engine to enable stats collection
@@ -1630,6 +1635,8 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 	if (!i915_modparams.enable_execlists)
 		return -ENODEV;
 
+	mutex_lock(&i915_engine_stats_mutex);
+
 	spin_lock_irqsave(&engine->stats.lock, flags);
 	if (engine->stats.enabled == ~0)
 		goto busy;
@@ -1637,10 +1644,16 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 		engine->stats.enabled_at = ktime_get();
 	spin_unlock_irqrestore(&engine->stats.lock, flags);
 
+	if (i915_engine_stats_ref++ == 0)
+		static_branch_enable(&i915_engine_stats_key);
+
+	mutex_unlock(&i915_engine_stats_mutex);
+
 	return 0;
 
 busy:
 	spin_unlock_irqrestore(&engine->stats.lock, flags);
+	mutex_unlock(&i915_engine_stats_mutex);
 
 	return -EBUSY;
 }
@@ -1658,6 +1671,7 @@ void intel_disable_engine_stats(struct intel_engine_cs *engine)
 	if (!i915_modparams.enable_execlists)
 		return;
 
+	mutex_lock(&i915_engine_stats_mutex);
 	spin_lock_irqsave(&engine->stats.lock, flags);
 	WARN_ON_ONCE(engine->stats.enabled == 0);
 	if (--engine->stats.enabled == 0) {
@@ -1667,6 +1681,9 @@ void intel_disable_engine_stats(struct intel_engine_cs *engine)
 		engine->stats.total = 0;
 	}
 	spin_unlock_irqrestore(&engine->stats.lock, flags);
+	if (--i915_engine_stats_ref == 0)
+		static_branch_disable(&i915_engine_stats_key);
+	mutex_unlock(&i915_engine_stats_mutex);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index ef638b97e46a..7d9506c3efdc 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -355,6 +355,22 @@ struct intel_engine_cs {
 		 * 		requested.
 		 */
 		bool busy_stats;
+		/**
+		 * @enable_busy_stats: Work item for engine busy stats enabling.
+		 *
+		 * Since the action can sleep it needs to be decoupled from the
+		 * perf API callback.
+		 */
+		struct work_struct enable_busy_stats;
+		/**
+		 * @disable_busy_stats: Work item for busy stats disabling.
+		 *
+		 * Same as with @enable_busy_stats action, with the difference
+		 * that we delay it in case there are rapid enable-disable
+		 * actions, which can happen during tool startup (like perf
+		 * stat).
+		 */
+		struct delayed_work disable_busy_stats;
 	} pmu;
 
 	/*
@@ -896,59 +912,68 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
 struct intel_engine_cs *
 intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
 
+DECLARE_STATIC_KEY_FALSE(i915_engine_stats_key);
+
 static inline void intel_engine_context_in(struct intel_engine_cs *engine)
 {
 	unsigned long flags;
 
-	if (READ_ONCE(engine->stats.enabled) == 0)
-		return;
+	if (static_branch_unlikely(&i915_engine_stats_key)) {
+		if (READ_ONCE(engine->stats.enabled) == 0)
+			return;
 
-	spin_lock_irqsave(&engine->stats.lock, flags);
+		spin_lock_irqsave(&engine->stats.lock, flags);
 
-	if (engine->stats.enabled > 0) {
-		if (engine->stats.active++ == 0)
-			engine->stats.start = ktime_get();
-		GEM_BUG_ON(engine->stats.active == 0);
-	}
+		if (engine->stats.enabled > 0) {
+			if (engine->stats.active++ == 0)
+				engine->stats.start = ktime_get();
+			GEM_BUG_ON(engine->stats.active == 0);
+		}
 
-	spin_unlock_irqrestore(&engine->stats.lock, flags);
+		spin_unlock_irqrestore(&engine->stats.lock, flags);
+	}
 }
 
 static inline void intel_engine_context_out(struct intel_engine_cs *engine)
 {
 	unsigned long flags;
 
-	if (READ_ONCE(engine->stats.enabled) == 0)
-		return;
-
-	spin_lock_irqsave(&engine->stats.lock, flags);
-
-	if (engine->stats.enabled > 0) {
-		ktime_t last, now = ktime_get();
-
-		if (engine->stats.active && --engine->stats.active == 0) {
-			/*
-			 * Decrement the active context count and in case GPU
-			 * is now idle add up to the running total.
-			 */
-			last = ktime_sub(now, engine->stats.start);
-
-			engine->stats.total = ktime_add(engine->stats.total,
-							last);
-		} else if (engine->stats.active == 0) {
-			/*
-			 * After turning on engine stats, context out might be
-			 * the first event in which case we account from the
-			 * time stats gathering was turned on.
-			 */
-			last = ktime_sub(now, engine->stats.enabled_at);
-
-			engine->stats.total = ktime_add(engine->stats.total,
-							last);
+	if (static_branch_unlikely(&i915_engine_stats_key)) {
+		if (READ_ONCE(engine->stats.enabled) == 0)
+			return;
+
+		spin_lock_irqsave(&engine->stats.lock, flags);
+
+		if (engine->stats.enabled > 0) {
+			ktime_t last, now = ktime_get();
+
+			if (engine->stats.active &&
+			    --engine->stats.active == 0) {
+				/*
+				 * Decrement the active context count and in
+				 * case GPU is now idle add up to the running
+				 * total.
+				 */
+				last = ktime_sub(now, engine->stats.start);
+
+				engine->stats.total =
+					ktime_add(engine->stats.total, last);
+			} else if (engine->stats.active == 0) {
+				/*
+				 * After turning on engine stats, context out
+				 * might be the first event in which case we
+				 * account from the time stats gathering was
+				 * turned on.
+				 */
+				last = ktime_sub(now, engine->stats.enabled_at);
+
+				engine->stats.total =
+					ktime_add(engine->stats.total, last);
+			}
 		}
-	}
 
-	spin_unlock_irqrestore(&engine->stats.lock, flags);
+		spin_unlock_irqrestore(&engine->stats.lock, flags);
+	}
 }
 
 int intel_enable_engine_stats(struct intel_engine_cs *engine);
-- 
2.9.5

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* ✓ Fi.CI.BAT: success for i915 PMU and engine busy stats (rev12)
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (7 preceding siblings ...)
  2017-09-25 15:15 ` [PATCH 8/8] drm/i915: Gate engine stats collection with a static key Tvrtko Ursulin
@ 2017-09-25 15:46 ` Patchwork
  2017-09-25 21:22 ` ✗ Fi.CI.IGT: failure " Patchwork
  9 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2017-09-25 15:46 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: i915 PMU and engine busy stats (rev12)
URL   : https://patchwork.freedesktop.org/series/27488/
State : success

== Summary ==

Series 27488v12 i915 PMU and engine busy stats
https://patchwork.freedesktop.org/api/1.0/series/27488/revisions/12/mbox/

Test kms_pipe_crc_basic:
        Subgroup suspend-read-crc-pipe-b:
                dmesg-warn -> PASS       (fi-skl-6770hq)

fi-bdw-5557u     total:289  pass:268  dwarn:0   dfail:0   fail:0   skip:21  time:440s
fi-bdw-gvtdvm    total:289  pass:265  dwarn:0   dfail:0   fail:0   skip:24  time:475s
fi-blb-e6850     total:289  pass:224  dwarn:1   dfail:0   fail:0   skip:64  time:419s
fi-bsw-n3050     total:289  pass:243  dwarn:0   dfail:0   fail:0   skip:46  time:521s
fi-bwr-2160      total:289  pass:184  dwarn:0   dfail:0   fail:0   skip:105 time:278s
fi-bxt-j4205     total:289  pass:260  dwarn:0   dfail:0   fail:0   skip:29  time:504s
fi-byt-j1900     total:289  pass:254  dwarn:1   dfail:0   fail:0   skip:34  time:498s
fi-byt-n2820     total:289  pass:250  dwarn:1   dfail:0   fail:0   skip:38  time:494s
fi-cfl-s         total:289  pass:222  dwarn:35  dfail:0   fail:0   skip:32  time:539s
fi-cnl-y         total:289  pass:257  dwarn:0   dfail:0   fail:5   skip:27  time:643s
fi-elk-e7500     total:289  pass:230  dwarn:0   dfail:0   fail:0   skip:59  time:411s
fi-glk-1         total:289  pass:260  dwarn:0   dfail:0   fail:0   skip:29  time:567s
fi-hsw-4770      total:289  pass:263  dwarn:0   dfail:0   fail:0   skip:26  time:424s
fi-hsw-4770r     total:289  pass:263  dwarn:0   dfail:0   fail:0   skip:26  time:405s
fi-ilk-650       total:289  pass:229  dwarn:0   dfail:0   fail:0   skip:60  time:433s
fi-ivb-3520m     total:289  pass:261  dwarn:0   dfail:0   fail:0   skip:28  time:485s
fi-ivb-3770      total:289  pass:261  dwarn:0   dfail:0   fail:0   skip:28  time:462s
fi-kbl-7500u     total:289  pass:264  dwarn:1   dfail:0   fail:0   skip:24  time:472s
fi-kbl-7560u     total:289  pass:270  dwarn:0   dfail:0   fail:0   skip:19  time:575s
fi-kbl-r         total:289  pass:262  dwarn:0   dfail:0   fail:0   skip:27  time:595s
fi-pnv-d510      total:289  pass:223  dwarn:1   dfail:0   fail:0   skip:65  time:542s
fi-skl-6260u     total:289  pass:269  dwarn:0   dfail:0   fail:0   skip:20  time:446s
fi-skl-6700k     total:289  pass:265  dwarn:0   dfail:0   fail:0   skip:24  time:750s
fi-skl-6770hq    total:289  pass:269  dwarn:0   dfail:0   fail:0   skip:20  time:488s
fi-skl-gvtdvm    total:289  pass:266  dwarn:0   dfail:0   fail:0   skip:23  time:474s
fi-snb-2520m     total:289  pass:251  dwarn:0   dfail:0   fail:0   skip:38  time:565s
fi-snb-2600      total:289  pass:250  dwarn:0   dfail:0   fail:0   skip:39  time:413s

0b65077382608db179cb5afafecf4a0edd35c2fe drm-tip: 2017y-09m-25d-13h-55m-18s UTC integration manifest
c602fd03a478 drm/i915: Gate engine stats collection with a static key
63889419b70e drm/i915/pmu: Wire up engine busy stats to PMU
1fac36561f66 drm/i915: Engine busy time tracking
a616984af5d8 drm/i915: Wrap context schedule notification
273c1d114d3c drm/i915/pmu: Suspend sampling when GPU is idle
3a679099c412 drm/i915/pmu: Expose a PMU interface for perf queries
0125ded13b1c drm/i915: Extract intel_get_cagf
c0b438e6df9b drm/i915: Convert intel_rc6_residency_us to ns

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_5807/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns
  2017-09-25 15:15 ` [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns Tvrtko Ursulin
@ 2017-09-25 17:08   ` Chris Wilson
  0 siblings, 0 replies; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:08 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-25 16:15:36)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Will be used for exposing the PMU counters.
> 
> v2:
>  * Move intel_runtime_pm_get/put to the callers. (Chris Wilson)
>  * Restore full unit conversion precision.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_drv.h   |  8 +++++++-
>  drivers/gpu/drm/i915/i915_sysfs.c |  9 +++++++--
>  drivers/gpu/drm/i915/intel_pm.c   | 27 +++++++++++++--------------
>  3 files changed, 27 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index c1e93a61d81b..ee03e839356f 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -4128,9 +4128,15 @@ void vlv_phy_reset_lanes(struct intel_encoder *encoder);
>  
>  int intel_gpu_freq(struct drm_i915_private *dev_priv, int val);
>  int intel_freq_opcode(struct drm_i915_private *dev_priv, int val);
> -u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
> +u64 intel_rc6_residency_ns(struct drm_i915_private *dev_priv,
>                            const i915_reg_t reg);
>  
> +static inline u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
> +                                        const i915_reg_t reg)
> +{
> +       return DIV_ROUND_UP_ULL(intel_rc6_residency_ns(dev_priv, reg), 1000);
> +}
> +
>  #define I915_READ8(reg)                dev_priv->uncore.funcs.mmio_readb(dev_priv, (reg), true)
>  #define I915_WRITE8(reg, val)  dev_priv->uncore.funcs.mmio_writeb(dev_priv, (reg), (val), true)
>  
> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> index d61c8727f756..1a34d32d0092 100644
> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> @@ -42,8 +42,13 @@ static inline struct drm_i915_private *kdev_minor_to_i915(struct device *kdev)
>  static u32 calc_residency(struct drm_i915_private *dev_priv,
>                           i915_reg_t reg)
>  {
> -       return DIV_ROUND_CLOSEST_ULL(intel_rc6_residency_us(dev_priv, reg),
> -                                    1000);
> +       u64 res;
> +
> +       intel_runtime_pm_get(dev_priv);
> +       res = intel_rc6_residency_us(dev_priv, reg);
> +       intel_runtime_pm_put(dev_priv);
> +
> +       return DIV_ROUND_CLOSEST_ULL(res, 1000);
>  }
>  
>  static ssize_t
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index c66af09e27a7..8b4ffff11cf7 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -9343,34 +9343,33 @@ static u64 vlv_residency_raw(struct drm_i915_private *dev_priv,
>         return lower | (u64)upper << 8;
>  }
>  
> -u64 intel_rc6_residency_us(struct drm_i915_private *dev_priv,
> +u64 intel_rc6_residency_ns(struct drm_i915_private *dev_priv,
>                            const i915_reg_t reg)
>  {
> -       u64 time_hw, units, div;
> +       u64 time_hw;
> +       u32 mul, div;

time_hw is a u32... except for vlv!

>  
>         if (!intel_enable_rc6())
>                 return 0;
>  
> -       intel_runtime_pm_get(dev_priv);
> -
>         /* On VLV and CHV, residency time is in CZ units rather than 1.28us */
>         if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
> -               units = 1000;
> +               mul = 1000000;
>                 div = dev_priv->czclk_freq;

Ok, scaling; previously returned us, now needs to be x1000 for ns.

> -
>                 time_hw = vlv_residency_raw(dev_priv, reg);
> -       } else if (IS_GEN9_LP(dev_priv)) {
> -               units = 1000;
> -               div = 1200;             /* 833.33ns */
>  
> -               time_hw = I915_READ(reg);
>         } else {
> -               units = 128000; /* 1.28us */
> -               div = 100000;
> +               /* 833.33ns units on Gen9LP, 1.28us elsewhere. */
> +               if (IS_GEN9_LP(dev_priv)) {
> +                       mul = 10000;
> +                       div = 12;

Ok.

> +               } else {
> +                       mul = 1280;
> +                       div = 1;

Ok.
> +               }
>  
>                 time_hw = I915_READ(reg);
>         }
>  
> -       intel_runtime_pm_put(dev_priv);
> -       return DIV_ROUND_UP_ULL(time_hw * units, div);
> +       return DIV_ROUND_UP_ULL(time_hw * mul, div);
>  }

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries
  2017-09-25 15:15 ` [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries Tvrtko Ursulin
@ 2017-09-25 17:37   ` Chris Wilson
  2017-09-26 12:28     ` Tvrtko Ursulin
  0 siblings, 1 reply; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:37 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx; +Cc: Peter Zijlstra

Quoting Tvrtko Ursulin (2017-09-25 16:15:38)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> From: Chris Wilson <chris@chris-wilson.co.uk>
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> From: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
> 
> The first goal is to be able to measure GPU (and individual ring) busyness
> without having to poll registers from userspace. (Which not only incurs
> holding the forcewake lock indefinitely, perturbing the system, but also
> runs the risk of hanging the machine.) As an alternative we can use the
> perf event counter interface to sample the ring registers periodically
> and send those results to userspace.
> 
> Functionality we are exporting to userspace is via the existing perf PMU
> API and can be exercised via the existing tools. For example:
> 
>   perf stat -a -e i915/rcs0-busy/ -I 1000
> 
> Will print the render engine busyness once per second. All the performance
> counters can be enumerated (perf list) and have their unit of measure
> correctly reported in sysfs.
> 
> v1-v2 (Chris Wilson):
> 
> v2: Use a common timer for the ring sampling.
> 
> v3: (Tvrtko Ursulin)
>  * Decouple uAPI from i915 engine ids.
>  * Complete uAPI defines.
>  * Refactor some code to helpers for clarity.
>  * Skip sampling disabled engines.
>  * Expose counters in sysfs.
>  * Pass in fake regs to avoid null ptr deref in perf core.
>  * Convert to class/instance uAPI.
>  * Use shared driver code for rc6 residency, power and frequency.
> 
> v4: (Dmitry Rogozhkin)
>  * Register PMU with .task_ctx_nr=perf_invalid_context
>  * Expose cpumask for the PMU with the single CPU in the mask
>  * Properly support pmu->stop(): it should call pmu->read()
>  * Properly support pmu->del(): it should call stop(event, PERF_EF_UPDATE)
>  * Introduce refcounting of event subscriptions.
>  * Make pmu.busy_stats a refcounter to avoid busy stats going away
>    with some deleted event.
>  * Expose cpumask for i915 PMU to avoid multiple events creation of
>    the same type followed by counter aggregation by perf-stat.
>  * Track CPUs getting online/offline to migrate perf context. If (likely)
>    cpumask will initially set CPU0, CONFIG_BOOTPARAM_HOTPLUG_CPU0 will be
>    needed to see effect of CPU status tracking.
>  * End result is that only global events are supported and perf stat
>    works correctly.
>  * Deny perf driver level sampling - it is prohibited for uncore PMU.
> 
> v5: (Tvrtko Ursulin)
> 
>  * Don't hardcode number of engine samplers.
>  * Rewrite event ref-counting for correctness and simplicity.
>  * Store initial counter value when starting already enabled events
>    to correctly report values to all listeners.
>  * Fix RC6 residency readout.
>  * Comments, GPL header.
> 
> v6:
>  * Add missing entry to v4 changelog.
>  * Fix accounting in CPU hotplug case by copying the approach from
>    arch/x86/events/intel/cstate.c. (Dmitry Rogozhkin)
> 
> v7:
>  * Log failure message only on failure.
>  * Remove CPU hotplug notification state on unregister.
> 
> v8:
>  * Fix error unwind on failed registration.
>  * Checkpatch cleanup.
> 
> v9:
>  * Drop the energy metric, it is available via intel_rapl_perf.
>    (Ville Syrjälä)
>  * Use HAS_RC6(p). (Chris Wilson)
>  * Handle unsupported non-engine events. (Dmitry Rogozhkin)
>  * Rebase for intel_rc6_residency_ns needing caller managed
>    runtime pm.
>  * Drop HAS_RC6 checks from the read callback since creating those
>    events will be rejected at init time already.
>  * Add counter units to sysfs so perf stat output is nicer.
>  * Cleanup the attribute tables for brevity and readability.
> 
> v10:
>  * Fixed queued accounting.
> 
> v11:
>  * Move intel_engine_lookup_user to intel_engine_cs.c
>  * Commit update. (Joonas Lahtinen)
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
>  drivers/gpu/drm/i915/Makefile           |   1 +
>  drivers/gpu/drm/i915/i915_drv.c         |   2 +
>  drivers/gpu/drm/i915/i915_drv.h         |  78 ++++
>  drivers/gpu/drm/i915/i915_pmu.c         | 707 ++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_reg.h         |   3 +
>  drivers/gpu/drm/i915/intel_engine_cs.c  |  35 ++
>  drivers/gpu/drm/i915/intel_ringbuffer.h |  25 ++
>  include/uapi/drm/i915_drm.h             |  57 +++
>  8 files changed, 908 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/i915_pmu.c
> 
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index 5182e3d5557d..5c6013961b3b 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -26,6 +26,7 @@ i915-y := i915_drv.o \
>  
>  i915-$(CONFIG_COMPAT)   += i915_ioc32.o
>  i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o
> +i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o
>  
>  # GEM code
>  i915-y += i915_cmd_parser.o \
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 7056bb299dc6..03bbe23e4df8 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -1200,6 +1200,7 @@ static void i915_driver_register(struct drm_i915_private *dev_priv)
>         struct drm_device *dev = &dev_priv->drm;
>  
>         i915_gem_shrinker_init(dev_priv);
> +       i915_pmu_register(dev_priv);
>  
>         /*
>          * Notify a valid surface after modesetting,
> @@ -1254,6 +1255,7 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
>         intel_opregion_unregister(dev_priv);
>  
>         i915_perf_unregister(dev_priv);
> +       i915_pmu_unregister(dev_priv);
>  
>         i915_teardown_sysfs(dev_priv);
>         i915_guc_log_unregister(dev_priv);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 6143a709c605..a40c314eed5f 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -40,6 +40,7 @@
>  #include <linux/hash.h>
>  #include <linux/intel-iommu.h>
>  #include <linux/kref.h>
> +#include <linux/perf_event.h>
>  #include <linux/pm_qos.h>
>  #include <linux/reservation.h>
>  #include <linux/shmem_fs.h>
> @@ -2198,6 +2199,70 @@ struct intel_cdclk_state {
>         unsigned int cdclk, vco, ref;
>  };
>  
> +enum {
> +       __I915_SAMPLE_FREQ_ACT = 0,
> +       __I915_SAMPLE_FREQ_REQ,
> +       __I915_NUM_PMU_SAMPLERS
> +};
> +
> +/**
> + * How many different events we track in the global PMU mask.
> + *
> + * It is also used to know the needed number of event reference counters.
> + */
> +#define I915_PMU_MASK_BITS \
> +       ((1 << I915_PMU_SAMPLE_BITS) + \
> +        (I915_PMU_LAST + 1 - __I915_PMU_OTHER(0)))
> +
> +struct i915_pmu {
> +       /**
> +        * @node: List node for CPU hotplug handling.
> +        */
> +       struct hlist_node node;
> +       /**
> +        * @base: PMU base.
> +        */
> +       struct pmu base;
> +       /**
> +        * @lock: Lock protecting enable mask and ref count handling.
> +        */
> +       spinlock_t lock;
> +       /**
> +        * @timer: Timer for internal i915 PMU sampling.
> +        */
> +       struct hrtimer timer;
> +       /**
> +        * @enable: Bitmask of all currently enabled events.
> +        *
> +        * Bits are derived from uAPI event numbers in a way that low 16 bits
> +        * correspond to engine event _sample_ _type_ (I915_SAMPLE_QUEUED is
> +        * bit 0), and higher bits correspond to other events (for instance
> +        * I915_PMU_ACTUAL_FREQUENCY is bit 16 etc).
> +        *
> +        * In other words, low 16 bits are not per engine but per engine
> +        * sampler type, while the upper bits are directly mapped to other
> +        * event types.
> +        */
> +       u64 enable;
> +       /**
> +        * @enable_count: Reference counts for the enabled events.
> +        *
> +        * Array indices are mapped in the same way as bits in the @enable field
> +        * and they are used to control sampling on/off when multiple clients
> +        * are using the PMU API.
> +        */
> +       unsigned int enable_count[I915_PMU_MASK_BITS];
> +       /**
> +        * @sample: Current counter value for i915 events which need sampling.
> +        *
> +        * These counters are updated from the i915 PMU sampling timer.
> +        *
> +        * Only global counters are held here, while the per-engine ones are in
> +        * struct intel_engine_cs.
> +        */
> +       u64 sample[__I915_NUM_PMU_SAMPLERS];
> +};
> +
>  struct drm_i915_private {
>         struct drm_device drm;
>  
> @@ -2246,6 +2311,8 @@ struct drm_i915_private {
>         struct pci_dev *bridge_dev;
>         struct i915_gem_context *kernel_context;
>         struct intel_engine_cs *engine[I915_NUM_ENGINES];
> +       struct intel_engine_cs *engine_class[MAX_ENGINE_CLASS + 1]
> +                                           [MAX_ENGINE_INSTANCE + 1];
>         struct i915_vma *semaphore;
>  
>         struct drm_dma_handle *status_page_dmah;
> @@ -2708,6 +2775,8 @@ struct drm_i915_private {
>                 int     irq;
>         } lpe_audio;
>  
> +       struct i915_pmu pmu;
> +
>         /*
>          * NOTE: This is the dri1/ums dungeon, don't add stuff here. Your patch
>          * will be rejected. Instead look for a better place.
> @@ -3932,6 +4001,15 @@ extern void i915_perf_fini(struct drm_i915_private *dev_priv);
>  extern void i915_perf_register(struct drm_i915_private *dev_priv);
>  extern void i915_perf_unregister(struct drm_i915_private *dev_priv);
>  
> +/* i915_pmu.c */
> +#ifdef CONFIG_PERF_EVENTS
> +void i915_pmu_register(struct drm_i915_private *i915);
> +void i915_pmu_unregister(struct drm_i915_private *i915);
> +#else
> +static inline void i915_pmu_register(struct drm_i915_private *i915) {}
> +static inline void i915_pmu_unregister(struct drm_i915_private *i915) {}
> +#endif
> +
>  /* i915_suspend.c */
>  extern int i915_save_state(struct drm_i915_private *dev_priv);
>  extern int i915_restore_state(struct drm_i915_private *dev_priv);
> diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
> new file mode 100644
> index 000000000000..cf1406fbc215
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/i915_pmu.c
> @@ -0,0 +1,707 @@
> +/*
> + * Copyright © 2017 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include <linux/perf_event.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "i915_drv.h"
> +#include "intel_ringbuffer.h"
> +
> +/* Frequency for the sampling timer for events which need it. */
> +#define FREQUENCY 200
> +#define PERIOD max_t(u64, 10000, NSEC_PER_SEC / FREQUENCY)
> +
> +#define ENGINE_SAMPLE_MASK \
> +       (BIT(I915_SAMPLE_QUEUED) | \
> +        BIT(I915_SAMPLE_BUSY) | \
> +        BIT(I915_SAMPLE_WAIT) | \
> +        BIT(I915_SAMPLE_SEMA))
> +
> +#define ENGINE_SAMPLE_BITS (1 << I915_PMU_SAMPLE_BITS)
> +
> +static cpumask_t i915_pmu_cpumask = CPU_MASK_NONE;
> +
> +static u8 engine_config_sample(u64 config)
> +{
> +       return config & I915_PMU_SAMPLE_MASK;
> +}
> +
> +static u8 engine_event_sample(struct perf_event *event)
> +{
> +       return engine_config_sample(event->attr.config);
> +}
> +
> +static u8 engine_event_class(struct perf_event *event)
> +{
> +       return (event->attr.config >> I915_PMU_CLASS_SHIFT) & 0xff;
> +}
> +
> +static u8 engine_event_instance(struct perf_event *event)
> +{
> +       return (event->attr.config >> I915_PMU_SAMPLE_BITS) & 0xff;
> +}
> +
> +static bool is_engine_config(u64 config)
> +{
> +       return config < __I915_PMU_OTHER(0);
> +}
> +
> +static unsigned int config_enabled_bit(u64 config)
> +{
> +       if (is_engine_config(config))
> +               return engine_config_sample(config);
> +       else
> +               return ENGINE_SAMPLE_BITS + (config - __I915_PMU_OTHER(0));
> +}
> +
> +static u64 config_enabled_mask(u64 config)
> +{
> +       return BIT_ULL(config_enabled_bit(config));
> +}
> +
> +static bool is_engine_event(struct perf_event *event)
> +{
> +       return is_engine_config(event->attr.config);
> +}
> +
> +static unsigned int event_enabled_bit(struct perf_event *event)
> +{
> +       return config_enabled_bit(event->attr.config);
> +}
> +
> +static bool grab_forcewake(struct drm_i915_private *i915, bool fw)
> +{
> +       if (!fw)
> +               intel_uncore_forcewake_get(i915, FORCEWAKE_ALL);
> +
> +       return true;
> +}
> +
> +static void engines_sample(struct drm_i915_private *dev_priv)
> +{
> +       struct intel_engine_cs *engine;
> +       enum intel_engine_id id;
> +       bool fw = false;
> +
> +       if ((dev_priv->pmu.enable & ENGINE_SAMPLE_MASK) == 0)
> +               return;
> +
> +       if (!dev_priv->gt.awake)
> +               return;
> +
> +       if (!intel_runtime_pm_get_if_in_use(dev_priv))
> +               return;
> +
> +       for_each_engine(engine, dev_priv, id) {
> +               u32 current_seqno = intel_engine_get_seqno(engine);
> +               u32 last_seqno = intel_engine_last_submit(engine);
> +               u32 enable = engine->pmu.enable;
> +
> +               if (i915_seqno_passed(current_seqno, last_seqno))
> +                       continue;
> +
> +               if ((enable & BIT(I915_SAMPLE_QUEUED)) &&
> +                   (last_seqno - 1 > current_seqno))

Hmm, this isn't queued. This is more than one rq submitted to hw.

Do you want total inflight (including waiting-for-fences)? That would be
engine->timeline->inflight_seqnos (with the additional problem that it is
updated lazily).

Hard to distinguish all the other cases, but last - current is susceptible
to not treating all ready requests as queued. And it miscounts on legacy
rings across semaphores (which you've excluded for execlists).

Aiui you want

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 4eb1a76731b2..19b8b31afbbc 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -477,6 +477,7 @@ void __i915_gem_request_submit(struct drm_i915_gem_request *request)
        engine->emit_breadcrumb(request,
                                request->ring->vaddr + request->postfix);
 
+       atomic_dec(&request->engine->queued);
        spin_lock(&request->timeline->lock);
        list_move_tail(&request->link, &timeline->requests);
        spin_unlock(&request->timeline->lock);
@@ -525,6 +526,7 @@ void __i915_gem_request_unsubmit(struct drm_i915_gem_request *request)
        spin_lock(&timeline->lock);
        list_move(&request->link, &timeline->requests);
        spin_unlock(&timeline->lock);
+       atomic_inc(&request->engine->queued);
 
        /* We don't need to wake_up any waiters on request->execute, they
         * will get woken by any other event or us re-adding this request
@@ -556,6 +558,7 @@ submit_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
        switch (state) {
        case FENCE_COMPLETE:
                trace_i915_gem_request_submit(request);
+               atomic_inc(&request->engine->queued);
                request->engine->submit_request(request);
                break;
 

That excludes the requests actually in the hw queue, counting just those
in the sw queue. Or count both.


> +                       engine->pmu.sample[I915_SAMPLE_QUEUED] += PERIOD;
> +
> +               if (enable & BIT(I915_SAMPLE_BUSY)) {
> +                       u32 val;
> +
> +                       fw = grab_forcewake(dev_priv, fw);
> +                       val = I915_READ_FW(RING_MI_MODE(engine->mmio_base));
> +                       if (!(val & MODE_IDLE))
> +                               engine->pmu.sample[I915_SAMPLE_BUSY] += PERIOD;
> +               }
> +
> +               if (enable & (BIT(I915_SAMPLE_WAIT) | BIT(I915_SAMPLE_SEMA))) {
> +                       u32 val;
> +
> +                       fw = grab_forcewake(dev_priv, fw);
> +                       val = I915_READ_FW(RING_CTL(engine->mmio_base));
> +                       if ((enable & BIT(I915_SAMPLE_WAIT)) &&
> +                           (val & RING_WAIT))
> +                               engine->pmu.sample[I915_SAMPLE_WAIT] += PERIOD;
> +                       if ((enable & BIT(I915_SAMPLE_SEMA)) &&
> +                           (val & RING_WAIT_SEMAPHORE))
> +                               engine->pmu.sample[I915_SAMPLE_SEMA] += PERIOD;
> +               }
> +       }
> +
> +       if (fw)
> +               intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
> +       intel_runtime_pm_put(dev_priv);
> +}
> +
> +static void frequency_sample(struct drm_i915_private *dev_priv)
> +{
> +       if (dev_priv->pmu.enable &
> +           config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
> +               u64 val;
> +
> +               val = dev_priv->rps.cur_freq;
> +               if (dev_priv->gt.awake &&
> +                   intel_runtime_pm_get_if_in_use(dev_priv)) {
> +                       val = intel_get_cagf(dev_priv,
> +                                            I915_READ_NOTRACE(GEN6_RPSTAT1));
> +                       intel_runtime_pm_put(dev_priv);
> +               }
> +               val = intel_gpu_freq(dev_priv, val);
> +               dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT] += val * PERIOD;
> +       }
> +
> +       if (dev_priv->pmu.enable &
> +           config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
> +               u64 val = intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq);
> +
> +               dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ] += val * PERIOD;
> +       }

Ok. Do we need a comment to explain the importance of averaging here?
Should we be doing trapezium integration?

> +}
> +
> +static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
> +{
> +       struct drm_i915_private *i915 =
> +               container_of(hrtimer, struct drm_i915_private, pmu.timer);
> +
> +       if (i915->pmu.enable == 0)
> +               return HRTIMER_NORESTART;
> +
> +       engines_sample(i915);
> +       frequency_sample(i915);
> +
> +       hrtimer_forward_now(hrtimer, ns_to_ktime(PERIOD));
> +       return HRTIMER_RESTART;
> +}
> +
> +static u64 count_interrupts(struct drm_i915_private *i915)
> +{
> +       /* open-coded kstat_irqs() */
> +       struct irq_desc *desc = irq_to_desc(i915->drm.pdev->irq);
> +       u64 sum = 0;
> +       int cpu;
> +
> +       if (!desc || !desc->kstat_irqs)
> +               return 0;
> +
> +       for_each_possible_cpu(cpu)
> +               sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
> +
> +       return sum;
> +}
> +
> +static void i915_pmu_event_destroy(struct perf_event *event)
> +{
> +       WARN_ON(event->parent);
> +}
> +
> +static int engine_event_init(struct perf_event *event)
> +{
> +       struct drm_i915_private *i915 =
> +               container_of(event->pmu, typeof(*i915), pmu.base);
> +
> +       if (!intel_engine_lookup_user(i915, engine_event_class(event),
> +                                     engine_event_instance(event)))
> +               return -ENODEV;
> +
> +       switch (engine_event_sample(event)) {
> +       case I915_SAMPLE_QUEUED:
> +       case I915_SAMPLE_BUSY:
> +       case I915_SAMPLE_WAIT:
> +               break;
> +       case I915_SAMPLE_SEMA:
> +               if (INTEL_GEN(i915) < 6)
> +                       return -ENODEV;
> +               break;
> +       default:
> +               return -ENOENT;
> +       }
> +
> +       return 0;
> +}
> +
> +static int i915_pmu_event_init(struct perf_event *event)
> +{
> +       struct drm_i915_private *i915 =
> +               container_of(event->pmu, typeof(*i915), pmu.base);
> +       int cpu, ret;
> +
> +       if (event->attr.type != event->pmu->type)
> +               return -ENOENT;
> +
> +       /* unsupported modes and filters */
> +       if (event->attr.sample_period) /* no sampling */
> +               return -EINVAL;
> +
> +       if (has_branch_stack(event))
> +               return -EOPNOTSUPP;
> +
> +       if (event->cpu < 0)
> +               return -EINVAL;
> +
> +       cpu = cpumask_any_and(&i915_pmu_cpumask,
> +                             topology_sibling_cpumask(event->cpu));
> +       if (cpu >= nr_cpu_ids)
> +               return -ENODEV;
> +
> +       if (is_engine_event(event)) {
> +               ret = engine_event_init(event);
> +       } else {
> +               ret = 0;
> +               switch (event->attr.config) {
> +               case I915_PMU_INTERRUPTS:
> +                       break;
> +               case I915_PMU_ACTUAL_FREQUENCY:
> +                       if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))
> +                                /* Requires a mutex for sampling! */
> +                               ret = -ENODEV;
> +               case I915_PMU_REQUESTED_FREQUENCY:
> +                       if (INTEL_GEN(i915) < 6)
> +                               ret = -ENODEV;
> +                       break;
> +               case I915_PMU_RC6_RESIDENCY:
> +                       if (!HAS_RC6(i915))
> +                               ret = -ENODEV;
> +                       break;
> +               case I915_PMU_RC6p_RESIDENCY:
> +               case I915_PMU_RC6pp_RESIDENCY:
> +                       if (!HAS_RC6p(i915))
> +                               ret = -ENODEV;

Must introduce you to my HAS_RC6pp() patch.

> +                       break;
> +               default:
> +                       ret = -ENOENT;
> +                       break;
> +               }
> +       }
> +       if (ret)
> +               return ret;
> +
> +       event->cpu = cpu;
> +       if (!event->parent)
> +               event->destroy = i915_pmu_event_destroy;
> +
> +       return 0;
> +}
> +
> +static u64 __i915_pmu_event_read(struct perf_event *event)
> +{
> +       struct drm_i915_private *i915 =
> +               container_of(event->pmu, typeof(*i915), pmu.base);
> +       u64 val = 0;
> +
> +       if (is_engine_event(event)) {
> +               u8 sample = engine_event_sample(event);
> +               struct intel_engine_cs *engine;
> +
> +               engine = intel_engine_lookup_user(i915,
> +                                                 engine_event_class(event),
> +                                                 engine_event_instance(event));
> +
> +               if (WARN_ON_ONCE(!engine)) {
> +                       /* Do nothing */
> +               } else {
> +                       val = engine->pmu.sample[sample];
> +               }
> +       } else {
> +               switch (event->attr.config) {
> +               case I915_PMU_ACTUAL_FREQUENCY:
> +                       val = i915->pmu.sample[__I915_SAMPLE_FREQ_ACT];
> +                       break;
> +               case I915_PMU_REQUESTED_FREQUENCY:
> +                       val = i915->pmu.sample[__I915_SAMPLE_FREQ_REQ];
> +                       break;
> +               case I915_PMU_INTERRUPTS:
> +                       val = count_interrupts(i915);
> +                       break;
> +               case I915_PMU_RC6_RESIDENCY:
> +                       intel_runtime_pm_get(i915);
> +                       val = intel_rc6_residency_ns(i915,
> +                                                    IS_VALLEYVIEW(i915) ?
> +                                                    VLV_GT_RENDER_RC6 :
> +                                                    GEN6_GT_GFX_RC6);
> +                       intel_runtime_pm_put(i915);
> +                       break;
> +               case I915_PMU_RC6p_RESIDENCY:
> +                       intel_runtime_pm_get(i915);
> +                       val = intel_rc6_residency_ns(i915, GEN6_GT_GFX_RC6p);
> +                       intel_runtime_pm_put(i915);
> +                       break;
> +               case I915_PMU_RC6pp_RESIDENCY:
> +                       intel_runtime_pm_get(i915);
> +                       val = intel_rc6_residency_ns(i915, GEN6_GT_GFX_RC6pp);
> +                       intel_runtime_pm_put(i915);
> +                       break;
> +               }
> +       }

Ok.

> +
> +       return val;
> +}
> +
> +static void i915_pmu_event_read(struct perf_event *event)
> +{
> +       struct hw_perf_event *hwc = &event->hw;
> +       u64 prev, new;
> +
> +again:
> +       prev = local64_read(&hwc->prev_count);
> +       new = __i915_pmu_event_read(event);
> +
> +       if (local64_cmpxchg(&hwc->prev_count, prev, new) != prev)
> +               goto again;
> +
> +       local64_add(new - prev, &event->count);
> +}
> +
> +static void i915_pmu_enable(struct perf_event *event)
> +{
> +       struct drm_i915_private *i915 =
> +               container_of(event->pmu, typeof(*i915), pmu.base);
> +       unsigned int bit = event_enabled_bit(event);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&i915->pmu.lock, flags);
> +
> +       /*
> +        * Start the sampling timer when enabling the first event.
> +        */
> +       if (i915->pmu.enable == 0)
> +               hrtimer_start_range_ns(&i915->pmu.timer,
> +                                      ns_to_ktime(PERIOD), 0,
> +                                      HRTIMER_MODE_REL_PINNED);
> +
> +       /*
> +        * Update the bitmask of enabled events and increment
> +        * the event reference counter.
> +        */
> +       GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
> +       GEM_BUG_ON(i915->pmu.enable_count[bit] == ~0);
> +       i915->pmu.enable |= BIT_ULL(bit);
> +       i915->pmu.enable_count[bit]++;
> +
> +       /*
> +        * For per-engine events the bitmask and reference counting
> +        * is stored per engine.
> +        */
> +       if (is_engine_event(event)) {
> +               u8 sample = engine_event_sample(event);
> +               struct intel_engine_cs *engine;
> +
> +               engine = intel_engine_lookup_user(i915,
> +                                                 engine_event_class(event),
> +                                                 engine_event_instance(event));
> +               GEM_BUG_ON(!engine);
> +               engine->pmu.enable |= BIT(sample);
> +
> +               GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
> +               GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
> +               engine->pmu.enable_count[sample]++;
> +       }
> +
> +       /*
> +        * Store the current counter value so we can report the correct delta
> +        * for all listeners. Even when the event was already enabled and has
> +        * an existing non-zero value.
> +        */
> +       local64_set(&event->hw.prev_count, __i915_pmu_event_read(event));
> +
> +       spin_unlock_irqrestore(&i915->pmu.lock, flags);
> +}
> +
> +static void i915_pmu_disable(struct perf_event *event)
> +{
> +       struct drm_i915_private *i915 =
> +               container_of(event->pmu, typeof(*i915), pmu.base);
> +       unsigned int bit = event_enabled_bit(event);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&i915->pmu.lock, flags);
> +
> +       if (is_engine_event(event)) {
> +               u8 sample = engine_event_sample(event);
> +               struct intel_engine_cs *engine;
> +
> +               engine = intel_engine_lookup_user(i915,
> +                                                 engine_event_class(event),
> +                                                 engine_event_instance(event));
> +               GEM_BUG_ON(!engine);
> +               GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
> +               GEM_BUG_ON(engine->pmu.enable_count[sample] == 0);
> +               /*
> +                * Decrement the reference count and clear the enabled
> +                * bitmask when the last listener on an event goes away.
> +                */
> +               if (--engine->pmu.enable_count[sample] == 0)
> +                       engine->pmu.enable &= ~BIT(sample);
> +       }
> +
> +       GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
> +       GEM_BUG_ON(i915->pmu.enable_count[bit] == 0);
> +       /*
> +        * Decrement the reference count and clear the enabled
> +        * bitmask when the last listener on an event goes away.
> +        */
> +       if (--i915->pmu.enable_count[bit] == 0)
> +               i915->pmu.enable &= ~BIT_ULL(bit);
> +
> +       spin_unlock_irqrestore(&i915->pmu.lock, flags);

Ok.
> +}
> +
> +static void i915_pmu_event_start(struct perf_event *event, int flags)
> +{
> +       i915_pmu_enable(event);
> +       event->hw.state = 0;
> +}
> +
> +static void i915_pmu_event_stop(struct perf_event *event, int flags)
> +{
> +       if (flags & PERF_EF_UPDATE)
> +               i915_pmu_event_read(event);
> +       i915_pmu_disable(event);
> +       event->hw.state = PERF_HES_STOPPED;

Never did understand the differences in perf start/stop, enable/disable,
add/del :)

> +}
> +
> +static int i915_pmu_event_add(struct perf_event *event, int flags)
> +{
> +       if (flags & PERF_EF_START)
> +               i915_pmu_event_start(event, flags);
> +
> +       return 0;
> +}
> +
> +static void i915_pmu_event_del(struct perf_event *event, int flags)
> +{
> +       i915_pmu_event_stop(event, PERF_EF_UPDATE);
> +}
> +
> +static int i915_pmu_event_event_idx(struct perf_event *event)
> +{
> +       return 0;
> +}
> +
> +static ssize_t i915_pmu_format_show(struct device *dev,
> +                                   struct device_attribute *attr, char *buf)
> +{
> +       struct dev_ext_attribute *eattr;
> +
> +       eattr = container_of(attr, struct dev_ext_attribute, attr);
> +       return sprintf(buf, "%s\n", (char *)eattr->var);
> +}
> +
> +#define I915_PMU_FORMAT_ATTR(_name, _config) \
> +       (&((struct dev_ext_attribute[]) { \
> +               { .attr = __ATTR(_name, 0444, i915_pmu_format_show, NULL), \
> +                 .var = (void *)_config, } \
> +       })[0].attr.attr)
> +
> +static struct attribute *i915_pmu_format_attrs[] = {
> +       I915_PMU_FORMAT_ATTR(i915_eventid, "config:0-42"),
> +       NULL,
> +};
> +
> +static const struct attribute_group i915_pmu_format_attr_group = {
> +       .name = "format",
> +       .attrs = i915_pmu_format_attrs,
> +};
> +
> +static ssize_t i915_pmu_event_show(struct device *dev,
> +                                  struct device_attribute *attr, char *buf)
> +{
> +       struct dev_ext_attribute *eattr;
> +
> +       eattr = container_of(attr, struct dev_ext_attribute, attr);
> +       return sprintf(buf, "config=0x%lx\n", (unsigned long)eattr->var);
> +}
> +
> +#define I915_EVENT_ATTR(_name, _config) \
> +       (&((struct dev_ext_attribute[]) { \
> +               { .attr = __ATTR(_name, 0444, i915_pmu_event_show, NULL), \
> +                 .var = (void *)_config, } \
> +       })[0].attr.attr)
> +
> +#define I915_EVENT_STR(_name, _str) \
> +       (&((struct perf_pmu_events_attr[]) { \
> +               { .attr      = __ATTR(_name, 0444, perf_event_sysfs_show, NULL), \
> +                 .id        = 0, \
> +                 .event_str = _str, } \
> +       })[0].attr.attr)
> +
> +#define I915_EVENT(_name, _config, _unit) \
> +       I915_EVENT_ATTR(_name, _config), \
> +       I915_EVENT_STR(_name.unit, _unit)
> +
> +#define I915_ENGINE_EVENT(_name, _class, _instance, _sample) \
> +       I915_EVENT_ATTR(_name, __I915_PMU_ENGINE(_class, _instance, _sample)), \
> +       I915_EVENT_STR(_name.unit, "ns")
> +
> +#define I915_ENGINE_EVENTS(_name, _class, _instance) \
> +       I915_ENGINE_EVENT(_name##_instance-queued, _class, _instance, I915_SAMPLE_QUEUED), \
> +       I915_ENGINE_EVENT(_name##_instance-busy, _class, _instance, I915_SAMPLE_BUSY), \
> +       I915_ENGINE_EVENT(_name##_instance-sema, _class, _instance, I915_SAMPLE_SEMA), \
> +       I915_ENGINE_EVENT(_name##_instance-wait, _class, _instance, I915_SAMPLE_WAIT)
> +
> +static struct attribute *i915_pmu_events_attrs[] = {
> +       I915_ENGINE_EVENTS(rcs, I915_ENGINE_CLASS_RENDER, 0),
> +       I915_ENGINE_EVENTS(bcs, I915_ENGINE_CLASS_COPY, 0),
> +       I915_ENGINE_EVENTS(vcs, I915_ENGINE_CLASS_VIDEO, 0),
> +       I915_ENGINE_EVENTS(vcs, I915_ENGINE_CLASS_VIDEO, 1),
> +       I915_ENGINE_EVENTS(vecs, I915_ENGINE_CLASS_VIDEO_ENHANCE, 0),

Please add a crossref to intel_engine_cs to remind ourselves to add the
extra perf counters if we add a new engine.

> +
> +       I915_EVENT(actual-frequency,    I915_PMU_ACTUAL_FREQUENCY,    "Hz"),
> +       I915_EVENT(requested-frequency, I915_PMU_REQUESTED_FREQUENCY, "Hz"),
> +
> +       I915_EVENT_ATTR(interrupts, I915_PMU_INTERRUPTS),
> +
> +       I915_EVENT(rc6-residency,   I915_PMU_RC6_RESIDENCY,   "ns"),
> +       I915_EVENT(rc6p-residency,  I915_PMU_RC6p_RESIDENCY,  "ns"),
> +       I915_EVENT(rc6pp-residency, I915_PMU_RC6pp_RESIDENCY, "ns"),
> +
> +       NULL,
> +};
> +
> +static const struct attribute_group i915_pmu_events_attr_group = {
> +       .name = "events",
> +       .attrs = i915_pmu_events_attrs,
> +};
> +
> +static ssize_t
> +i915_pmu_get_attr_cpumask(struct device *dev,
> +                         struct device_attribute *attr,
> +                         char *buf)
> +{
> +       return cpumap_print_to_pagebuf(true, buf, &i915_pmu_cpumask);
> +}
> +
> +static DEVICE_ATTR(cpumask, 0444, i915_pmu_get_attr_cpumask, NULL);
> +
> +static struct attribute *i915_cpumask_attrs[] = {
> +       &dev_attr_cpumask.attr,
> +       NULL,
> +};
> +
> +static struct attribute_group i915_pmu_cpumask_attr_group = {
> +       .attrs = i915_cpumask_attrs,
> +};
> +
> +static const struct attribute_group *i915_pmu_attr_groups[] = {
> +       &i915_pmu_format_attr_group,
> +       &i915_pmu_events_attr_group,
> +       &i915_pmu_cpumask_attr_group,
> +       NULL
> +};
> +
> +static int i915_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
> +{
> +       unsigned int target;
> +
> +       target = cpumask_any_and(&i915_pmu_cpumask, &i915_pmu_cpumask);
> +       /* Select the first online CPU as a designated reader. */
> +       if (target >= nr_cpu_ids)
> +               cpumask_set_cpu(cpu, &i915_pmu_cpumask);
> +
> +       return 0;
> +}
> +
> +static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
> +{
> +       struct i915_pmu *pmu = hlist_entry_safe(node, typeof(*pmu), node);
> +       unsigned int target;
> +
> +       if (cpumask_test_and_clear_cpu(cpu, &i915_pmu_cpumask)) {
> +               target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
> +               /* Migrate events if there is a valid target */
> +               if (target < nr_cpu_ids) {
> +                       cpumask_set_cpu(target, &i915_pmu_cpumask);
> +                       perf_pmu_migrate_context(&pmu->base, cpu, target);
> +               }
> +       }
> +
> +       return 0;
> +}
> +
> +void i915_pmu_register(struct drm_i915_private *i915)
> +{
> +       int ret;
> +
> +       if (INTEL_GEN(i915) <= 2) {
> +               DRM_INFO("PMU not supported for this GPU.");
> +               return;
> +       }
> +
> +       ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
> +                                     "perf/x86/intel/i915:online",
> +                                     i915_pmu_cpu_online,
> +                                     i915_pmu_cpu_offline);
> +       if (ret)
> +               goto err_hp_state;
> +
> +       ret = cpuhp_state_add_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
> +                                      &i915->pmu.node);
> +       if (ret)
> +               goto err_hp_instance;
> +
> +       i915->pmu.base.attr_groups      = i915_pmu_attr_groups;
> +       i915->pmu.base.task_ctx_nr      = perf_invalid_context;
> +       i915->pmu.base.event_init       = i915_pmu_event_init;
> +       i915->pmu.base.add              = i915_pmu_event_add;
> +       i915->pmu.base.del              = i915_pmu_event_del;
> +       i915->pmu.base.start            = i915_pmu_event_start;
> +       i915->pmu.base.stop             = i915_pmu_event_stop;
> +       i915->pmu.base.read             = i915_pmu_event_read;
> +       i915->pmu.base.event_idx        = i915_pmu_event_event_idx;
> +
> +       spin_lock_init(&i915->pmu.lock);
> +       hrtimer_init(&i915->pmu.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> +       i915->pmu.timer.function = i915_sample;
> +       i915->pmu.enable = 0;
> +
> +       ret = perf_pmu_register(&i915->pmu.base, "i915", -1);
> +       if (ret == 0)
> +               return;
> +
> +       i915->pmu.base.event_init = NULL;
> +
> +       WARN_ON(cpuhp_state_remove_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
> +                                           &i915->pmu.node));
> +
> +err_hp_instance:
> +       cpuhp_remove_multi_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
> +
> +err_hp_state:
> +       DRM_NOTE("Failed to register PMU! (err=%d)\n", ret);

Fair enough, not fatal, the driver will continue; just some user
functionality will be lost. No data loss or anything else meriting a
warning.

> +}
> +
> +void i915_pmu_unregister(struct drm_i915_private *i915)
> +{
> +       if (!i915->pmu.base.event_init)
> +               return;
> +
> +       WARN_ON(cpuhp_state_remove_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
> +                                           &i915->pmu.node));
> +       cpuhp_remove_multi_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
> +
> +       i915->pmu.enable = 0;
> +
> +       hrtimer_cancel(&i915->pmu.timer);
> +
> +       perf_pmu_unregister(&i915->pmu.base);
> +       i915->pmu.base.event_init = NULL;
> +}
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 82f36dd0cd94..16e36b2eef87 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -186,6 +186,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>  #define VIDEO_ENHANCEMENT_CLASS        2
>  #define COPY_ENGINE_CLASS      3
>  #define OTHER_CLASS            4
> +#define MAX_ENGINE_CLASS       4
> +
> +#define MAX_ENGINE_INSTANCE    1
>  
>  /* PCI config space */
>  
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index a28e2a864cf1..35c117c3fa0d 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -200,6 +200,15 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
>         GEM_BUG_ON(info->class >= ARRAY_SIZE(intel_engine_classes));
>         class_info = &intel_engine_classes[info->class];
>  
> +       if (GEM_WARN_ON(info->class > MAX_ENGINE_CLASS))
> +               return -EINVAL;
> +
> +       if (GEM_WARN_ON(info->instance > MAX_ENGINE_INSTANCE))
> +               return -EINVAL;
> +
> +       if (GEM_WARN_ON(dev_priv->engine_class[info->class][info->instance]))
> +               return -EINVAL;
> +
>         GEM_BUG_ON(dev_priv->engine[id]);
>         engine = kzalloc(sizeof(*engine), GFP_KERNEL);
>         if (!engine)
> @@ -227,6 +236,7 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
>  
>         ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
>  
> +       dev_priv->engine_class[info->class][info->instance] = engine;
>         dev_priv->engine[id] = engine;
>         return 0;
>  }
> @@ -1578,6 +1588,31 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine)
>         }
>  }
>  
> +static u8 user_class_map[I915_ENGINE_CLASS_MAX] = {
> +       [I915_ENGINE_CLASS_OTHER] = OTHER_CLASS,
> +       [I915_ENGINE_CLASS_RENDER] = RENDER_CLASS,
> +       [I915_ENGINE_CLASS_COPY] = COPY_ENGINE_CLASS,
> +       [I915_ENGINE_CLASS_VIDEO] = VIDEO_DECODE_CLASS,
> +       [I915_ENGINE_CLASS_VIDEO_ENHANCE] = VIDEO_ENHANCEMENT_CLASS,
> +};
> +
> +struct intel_engine_cs *
> +intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
> +{
> +       if (class >= ARRAY_SIZE(user_class_map))
> +               return NULL;
> +
> +       class = user_class_map[class];
> +
> +       if (WARN_ON_ONCE(class > MAX_ENGINE_CLASS))
> +               return NULL;
> +
> +       if (instance > MAX_ENGINE_INSTANCE)
> +               return NULL;
> +
> +       return i915->engine_class[class][instance];
> +}
> +
>  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>  #include "selftests/mock_engine.c"
>  #endif
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 56d7ae9f298b..7a901e766d03 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -330,6 +330,28 @@ struct intel_engine_cs {
>                 I915_SELFTEST_DECLARE(bool mock : 1);
>         } breadcrumbs;
>  
> +       struct {
> +               /**
> +                * @enable: Bitmask of enabled sample events on this engine.
> +                *
> +                * Bits correspond to sample event types, for instance
> +                * I915_SAMPLE_QUEUED is bit 0 etc.
> +                */
> +               u32 enable;
> +               /**
> +                * @enable_count: Reference count for the enabled samplers.
> +                *
> +                * Index number corresponds to the bit number from @enable.
> +                */
> +               unsigned int enable_count[I915_PMU_SAMPLE_BITS];
> +               /**
> +                * @sample: Counter value for sampling events.
> +                *
> +                * Our internal timer stores the current counter in this field.
> +                */
> +               u64 sample[I915_ENGINE_SAMPLE_MAX];
> +       } pmu;
> +
>         /*
>          * A pool of objects to use as shadow copies of client batch buffers
>          * when the command parser is enabled. Prevents the client from
> @@ -834,4 +856,7 @@ void intel_engines_reset_default_submission(struct drm_i915_private *i915);
>  
>  bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
>  
> +struct intel_engine_cs *
> +intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
> +
>  #endif /* _INTEL_RINGBUFFER_H_ */
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index fe25a01c81f2..682abafecc8c 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -86,6 +86,63 @@ enum i915_mocs_table_index {
>         I915_MOCS_CACHED,
>  };

Hmm, how about uapi/drm/i915_perf.h? (Or i915_pmu.h)
We are not adding traditional drm-ioctl functionality (except for some
common enums). But I guess it is not huge, so no big concern. Just
thinking that we essentially copy this into igt, so we should really
make it a copy!

> +enum drm_i915_gem_engine_class {
> +       I915_ENGINE_CLASS_OTHER = 0,
> +       I915_ENGINE_CLASS_RENDER = 1,
> +       I915_ENGINE_CLASS_COPY = 2,
> +       I915_ENGINE_CLASS_VIDEO = 3,
> +       I915_ENGINE_CLASS_VIDEO_ENHANCE = 4,
> +       I915_ENGINE_CLASS_MAX /* non-ABI */
> +};
> +
> +/**
> + * DOC: perf_events exposed by i915 through /sys/bus/event_sources/drivers/i915
> + *
> + */
> +
> +enum drm_i915_pmu_engine_sample {
> +       I915_SAMPLE_QUEUED = 0,
> +       I915_SAMPLE_BUSY = 1,
> +       I915_SAMPLE_WAIT = 2,
> +       I915_SAMPLE_SEMA = 3,
> +       I915_ENGINE_SAMPLE_MAX /* non-ABI */
> +};
> +
> +#define I915_PMU_SAMPLE_BITS (4)
> +#define I915_PMU_SAMPLE_MASK (0xf)
> +#define I915_PMU_SAMPLE_INSTANCE_BITS (8)
> +#define I915_PMU_CLASS_SHIFT \
> +       (I915_PMU_SAMPLE_BITS + I915_PMU_SAMPLE_INSTANCE_BITS)
> +
> +#define __I915_PMU_ENGINE(class, instance, sample) \
> +       ((class) << I915_PMU_CLASS_SHIFT | \
> +       (instance) << I915_PMU_SAMPLE_BITS | \
> +       (sample))
> +
> +#define I915_PMU_ENGINE_QUEUED(class, instance) \
> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_QUEUED)
> +
> +#define I915_PMU_ENGINE_BUSY(class, instance) \
> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_BUSY)
> +
> +#define I915_PMU_ENGINE_WAIT(class, instance) \
> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_WAIT)
> +
> +#define I915_PMU_ENGINE_SEMA(class, instance) \
> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_SEMA)
> +
> +#define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x))
> +
> +#define I915_PMU_ACTUAL_FREQUENCY      __I915_PMU_OTHER(0)
> +#define I915_PMU_REQUESTED_FREQUENCY   __I915_PMU_OTHER(1)
> +#define I915_PMU_INTERRUPTS            __I915_PMU_OTHER(2)
> +
> +#define I915_PMU_RC6_RESIDENCY         __I915_PMU_OTHER(3)
> +#define I915_PMU_RC6p_RESIDENCY                __I915_PMU_OTHER(4)
> +#define I915_PMU_RC6pp_RESIDENCY       __I915_PMU_OTHER(5)
> +
> +#define I915_PMU_LAST I915_PMU_RC6pp_RESIDENCY
> +
>  /* Each region is a minimum of 16k, and there are at most 255 of them.
>   */
>  #define I915_NR_TEX_REGIONS 255        /* table size 2k - maximum due to use
> -- 
> 2.9.5
> 
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 30+ messages in thread
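
The config encoding defined by the uapi macros in the patch above (class in the high bits, then instance, then sample type) can be checked stand-alone. The following sketch reproduces those macros outside the kernel tree purely so the bit layout is easy to verify; it is illustrative and not part of the patch itself.

```c
#include <assert.h>

/* Reproduction of the uapi bit layout from the patch above:
 * config = class << 12 | instance << 4 | sample
 * (4 sample bits, 8 instance bits, class in the remaining bits).
 */
#define I915_PMU_SAMPLE_BITS (4)
#define I915_PMU_SAMPLE_INSTANCE_BITS (8)
#define I915_PMU_CLASS_SHIFT \
	(I915_PMU_SAMPLE_BITS + I915_PMU_SAMPLE_INSTANCE_BITS)

#define __I915_PMU_ENGINE(class, instance, sample) \
	((class) << I915_PMU_CLASS_SHIFT | \
	 (instance) << I915_PMU_SAMPLE_BITS | \
	 (sample))

#define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x))
```

For example, the busy counter for the render engine (class 1, instance 0 in the enum of this patch revision) encodes to 0x1001, and the non-engine events start at 0x100000, one past the largest possible engine encoding, so the two ranges can never collide.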

* Re: [PATCH 4/8] drm/i915/pmu: Suspend sampling when GPU is idle
  2017-09-25 15:15 ` [PATCH 4/8] drm/i915/pmu: Suspend sampling when GPU is idle Tvrtko Ursulin
@ 2017-09-25 17:39   ` Chris Wilson
  0 siblings, 0 replies; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:39 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-25 16:15:39)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> If only a subset of events is enabled we can afford to suspend
> the sampling timer when the GPU is idle and so save some cycles
> and power.
> 
> v2: Rebase and limit timer even more.
> v3: Rebase.
> v4: Rebase.
> v5: Skip action if perf PMU failed to register.
> v6: Checkpatch cleanup.
> v7:
>  * Add a common helper to start the timer if needed. (Chris Wilson)
>  * Add comment explaining bitwise logic in pmu_needs_timer.
> v8: Fix some comments styles. (Chris Wilson)
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

You may have gone a little overboard on the comments; a little too far
into i++ /* post-increment i*/ ;)

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 5/8] drm/i915: Wrap context schedule notification
  2017-09-25 15:15 ` [PATCH 5/8] drm/i915: Wrap context schedule notification Tvrtko Ursulin
@ 2017-09-25 17:40   ` Chris Wilson
  2017-09-25 18:40     ` Chris Wilson
  0 siblings, 1 reply; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:40 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-25 16:15:40)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> No functional change just something which will be handy in the
> following patch.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 6/8] drm/i915: Engine busy time tracking
  2017-09-25 15:15 ` [PATCH 6/8] drm/i915: Engine busy time tracking Tvrtko Ursulin
@ 2017-09-25 17:43   ` Chris Wilson
  2017-09-26 12:30     ` Tvrtko Ursulin
  0 siblings, 1 reply; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:43 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-25 16:15:41)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Track total time requests have been executing on the hardware.
> 
> We add new kernel API to allow software tracking of time GPU
> engines are spending executing requests.
> 
> Both per-engine and global API is added with the latter also
> being exported for use by external users.
> 
> v2:
>  * Squashed with the internal API.
>  * Dropped static key.
>  * Made per-engine.
>  * Store time in monotonic ktime.
> 
> v3: Moved stats clearing to disable.
> 
> v4:
>  * Comments.
>  * Don't export the API just yet.
> 
> v5: Whitespace cleanup.
> 
> v6:
>  * Rename ref to active.
>  * Drop engine aggregate stats for now.
>  * Account initial busy period after enabling stats.
> 
> v7:
>  * Rebase.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_engine_cs.c  | 84 ++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/intel_lrc.c        |  2 +
>  drivers/gpu/drm/i915/intel_ringbuffer.h | 92 +++++++++++++++++++++++++++++++++
>  3 files changed, 178 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index 35c117c3fa0d..8db83f504d70 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -234,6 +234,8 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
>         /* Nothing to do here, execute in order of dependencies */
>         engine->schedule = NULL;
>  
> +       spin_lock_init(&engine->stats.lock);
> +
>         ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
>  
>         dev_priv->engine_class[info->class][info->instance] = engine;
> @@ -1613,6 +1615,88 @@ intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
>         return i915->engine_class[class][instance];
>  }
>  
> +/**
> + * intel_enable_engine_stats() - Enable engine busy tracking on engine
> + * @engine: engine to enable stats collection
> + *
> + * Start collecting the engine busyness data for @engine.
> + *
> + * Returns 0 on success or a negative error code.
> + */
> +int intel_enable_engine_stats(struct intel_engine_cs *engine)
> +{
> +       unsigned long flags;
> +
> +       if (!i915_modparams.enable_execlists)
> +               return -ENODEV;
> +
> +       spin_lock_irqsave(&engine->stats.lock, flags);
> +       if (engine->stats.enabled == ~0)
> +               goto busy;

This still makes me go wut? Why not just refcount_t for this?

> +       if (engine->stats.enabled++ == 0)
> +               engine->stats.enabled_at = ktime_get();
> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
> +
> +       return 0;
> +
> +busy:
> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
> +
> +       return -EBUSY;
> +}
> +
> +/**
> + * intel_disable_engine_stats() - Disable engine busy tracking on engine
> + * @engine: engine to disable stats collection
> + *
> + * Stops collecting the engine busyness data for @engine.
> + */
> +void intel_disable_engine_stats(struct intel_engine_cs *engine)
> +{
> +       unsigned long flags;
> +
> +       if (!i915_modparams.enable_execlists)
> +               return;
> +
> +       spin_lock_irqsave(&engine->stats.lock, flags);
> +       WARN_ON_ONCE(engine->stats.enabled == 0);
> +       if (--engine->stats.enabled == 0) {
> +               engine->stats.enabled_at = 0;
> +               engine->stats.active = 0;
> +               engine->stats.start = 0;
> +               engine->stats.total = 0;
> +       }
> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
> +}
> +
> +/**
> + * intel_engine_get_busy_time() - Return current accumulated engine busyness
> + * @engine: engine to report on
> + *
> + * Returns accumulated time @engine was busy since engine stats were enabled.
> + */
> +ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine)
> +{
> +       ktime_t total;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&engine->stats.lock, flags);
> +
> +       total = engine->stats.total;
> +
> +       /*
> +        * If the engine is executing something at the moment
> +        * add it to the total.
> +        */
> +       if (engine->stats.active)
> +               total = ktime_add(total,
> +                                 ktime_sub(ktime_get(), engine->stats.start));
> +
> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
> +
> +       return total;
> +}
> +
>  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>  #include "selftests/mock_engine.c"
>  #endif
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 7cd14b701ed7..1e743d3d16cb 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -366,12 +366,14 @@ execlists_context_status_change(struct drm_i915_gem_request *rq,
>  static inline void
>  execlists_context_schedule_in(struct drm_i915_gem_request *rq)
>  {
> +       intel_engine_context_in(rq->engine);
>         execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);

Symmetry: start accumulating after the notifier.

>  }
>  
>  static inline void
>  execlists_context_schedule_out(struct drm_i915_gem_request *rq)
>  {
> +       intel_engine_context_out(rq->engine);
>         execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
>  }
>  
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 7a901e766d03..8db228ebdb28 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -538,6 +538,38 @@ struct intel_engine_cs {
>          * certain bits to encode the command length in the header).
>          */
>         u32 (*get_cmd_length_mask)(u32 cmd_header);
> +
> +       struct {
> +               /**
> +                * @lock: Lock protecting the below fields.
> +                */
> +               spinlock_t lock;
> +               /**
> +                * @enabled: Reference count indicating number of listeners.
> +                */
> +               unsigned int enabled;
> +               /**
> +                * @active: Number of contexts currently scheduled in.
> +                */
> +               unsigned int active;
> +               /**
> +                * @enabled_at: Timestamp when busy stats were enabled.
> +                */
> +               ktime_t enabled_at;
> +               /**
> +                * @start: Timestamp of the last idle to active transition.
> +                *
> +                * Idle is defined as active == 0, active is active > 0.
> +                */
> +               ktime_t start;
> +               /**
> +                * @total: Total time this engine was busy.
> +                *
> +                * Accumulated time not counting the most recent block in cases
> +                * where engine is currently busy (active > 0).
> +                */
> +               ktime_t total;
> +       } stats;
>  };
>  
>  static inline unsigned int
> @@ -859,4 +891,64 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
>  struct intel_engine_cs *
>  intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
>  
> +static inline void intel_engine_context_in(struct intel_engine_cs *engine)
> +{
> +       unsigned long flags;
> +
> +       if (READ_ONCE(engine->stats.enabled) == 0)
> +               return;
> +
> +       spin_lock_irqsave(&engine->stats.lock, flags);
> +
> +       if (engine->stats.enabled > 0) {
> +               if (engine->stats.active++ == 0)
> +                       engine->stats.start = ktime_get();
> +               GEM_BUG_ON(engine->stats.active == 0);
> +       }
> +
> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
> +}
> +
> +static inline void intel_engine_context_out(struct intel_engine_cs *engine)
> +{
> +       unsigned long flags;
> +
> +       if (READ_ONCE(engine->stats.enabled) == 0)
> +               return;
> +
> +       spin_lock_irqsave(&engine->stats.lock, flags);
> +
> +       if (engine->stats.enabled > 0) {
> +               ktime_t last, now = ktime_get();
> +
> +               if (engine->stats.active && --engine->stats.active == 0) {
> +                       /*
> +                        * Decrement the active context count and in case GPU
> +                        * is now idle add up to the running total.
> +                        */
> +                       last = ktime_sub(now, engine->stats.start);
> +
> +                       engine->stats.total = ktime_add(engine->stats.total,
> +                                                       last);
> +               } else if (engine->stats.active == 0) {
> +                       /*
> +                        * After turning on engine stats, context out might be
> +                        * the first event in which case we account from the
> +                        * time stats gathering was turned on.
> +                        */
> +                       last = ktime_sub(now, engine->stats.enabled_at);
> +
> +                       engine->stats.total = ktime_add(engine->stats.total,
> +                                                       last);
> +               }
> +       }
> +
> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
> +}
> +
> +int intel_enable_engine_stats(struct intel_engine_cs *engine);
> +void intel_disable_engine_stats(struct intel_engine_cs *engine);
> +
> +ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine);
> +
>  #endif /* _INTEL_RINGBUFFER_H_ */
> -- 
> 2.9.5
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread
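
For reference, the context_in/context_out/get_busy_time arithmetic under review here can be modelled outside the driver with a fake clock standing in for ktime_get(). A minimal sketch, with the spinlock and the enabled refcount from the real patch deliberately omitted, and with illustrative rather than i915 names:

```c
#include <assert.h>
#include <stdint.h>

/* Fake monotonic clock standing in for ktime_get(); advanced by hand. */
static uint64_t fake_now;

struct engine_stats {
	unsigned int active;	/* contexts currently scheduled in */
	uint64_t start;		/* timestamp of last idle -> active */
	uint64_t total;		/* busy time accumulated while idle */
};

/* Mirrors intel_engine_context_in(): open a busy interval on the
 * first idle -> active transition only. */
static void context_in(struct engine_stats *s)
{
	if (s->active++ == 0)
		s->start = fake_now;
}

/* Mirrors intel_engine_context_out(): close the interval only when
 * the last active context leaves. */
static void context_out(struct engine_stats *s)
{
	if (--s->active == 0)
		s->total += fake_now - s->start;
}

/* Mirrors intel_engine_get_busy_time(): include the still-open
 * interval if the engine is currently busy. */
static uint64_t get_busy_time(const struct engine_stats *s)
{
	return s->active ? s->total + (fake_now - s->start) : s->total;
}
```

Two overlapping contexts (in at t=100 and t=150, out at t=180 and t=200) produce a single busy interval of 100 ticks, and a query while the engine is still active reports the open interval as well, which is the behaviour the review comments above are checking.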

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-25 15:15 ` [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU Tvrtko Ursulin
@ 2017-09-25 17:48   ` Chris Wilson
  2017-09-26 12:32     ` Tvrtko Ursulin
  0 siblings, 1 reply; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:48 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> We can use engine busy stats instead of the MMIO sampling timer
> for better efficiency.
> 
> As minimum this saves period * num_engines / sec mmio reads,
> and in a better case, when only engine busy samplers are active,
> it enables us to not kick off the sampling timer at all.

Or you could inspect port_isset(execlists.port).
You can avoid the mmio for this case also by just using HWSP. It's just
that I never enabled busy tracking in isolation, so I always ended up
using the mmio.

A stronger argument is that this method gives more precise timing than
stochastic sampling.

> 
> v2: Rebase.
> v3:
>  * Rebase, comments.
>  * Leave engine busyness controls out of workers.
> v4: Checkpatch cleanup.
> v5: Added comment to pmu_needs_timer change.
> v6:
>  * Rebase.
>  * Fix style of some comments. (Chris Wilson)
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

With a better explanation of why this is preferred,
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 8/8] drm/i915: Gate engine stats collection with a static key
  2017-09-25 15:15 ` [PATCH 8/8] drm/i915: Gate engine stats collection with a static key Tvrtko Ursulin
@ 2017-09-25 17:56   ` Chris Wilson
  2017-09-26 12:42     ` Tvrtko Ursulin
  0 siblings, 1 reply; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 17:56 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-25 16:15:43)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> This reduces the cost of the software engine busyness tracking
> to a single no-op instruction when there are no listeners.
> 
> v2: Rebase and some comments.
> v3: Rebase.
> v4: Checkpatch fixes.
> v5: Rebase.
> v6: Use system_long_wq to avoid being blocked by struct_mutex
>     users.
> v7: Fix bad conflict resolution from last rebase. (Dmitry Rogozhkin)
> v8: Rebase.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Bah, still unhappy about the global. I know in all likelihood it doesn't
matter, but it still bugs me.

> ---
>  drivers/gpu/drm/i915/i915_pmu.c         |  54 +++++++++++++++--
>  drivers/gpu/drm/i915/intel_engine_cs.c  |  17 ++++++
>  drivers/gpu/drm/i915/intel_ringbuffer.h | 101 ++++++++++++++++++++------------
>  3 files changed, 130 insertions(+), 42 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
> index 228aa50ce709..e768f33ebb3d 100644
> --- a/drivers/gpu/drm/i915/i915_pmu.c
> +++ b/drivers/gpu/drm/i915/i915_pmu.c
> @@ -501,11 +501,17 @@ static void i915_pmu_enable(struct perf_event *event)
>                 GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
>                 GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
>                 if (engine->pmu.enable_count[sample]++ == 0) {
> +                       /*
> +                        * Enable engine busy stats tracking if needed or
> +                        * alternatively cancel the scheduled disabling of the
> +                        * same.
> +                        */
>                         if (engine_needs_busy_stats(engine) &&
>                             !engine->pmu.busy_stats) {
> -                               engine->pmu.busy_stats =
> -                                       intel_enable_engine_stats(engine) == 0;
> -                               WARN_ON_ONCE(!engine->pmu.busy_stats);
> +                               engine->pmu.busy_stats = true;
> +                               if (!cancel_delayed_work(&engine->pmu.disable_busy_stats))
> +                                       queue_work(system_long_wq,
> +                                                  &engine->pmu.enable_busy_stats);
>                         }
>                 }
>         }
> @@ -548,7 +554,15 @@ static void i915_pmu_disable(struct perf_event *event)
>                         if (!engine_needs_busy_stats(engine) &&
>                             engine->pmu.busy_stats) {
>                                 engine->pmu.busy_stats = false;
> -                               intel_disable_engine_stats(engine);
> +                               /*
> +                                * We request a delayed disable to handle the
> +                                * rapid on/off cycles on events which can
> +                                * happen when tools like perf stat start in a
> +                                * nicer way.
> +                                */
> +                               queue_delayed_work(system_long_wq,
> +                                                  &engine->pmu.disable_busy_stats,
> +                                                  round_jiffies_up_relative(HZ));

What's preventing the perverse system from executing the enable after
the disable? Say the enable was scheduled on another cpu that was
stalled?

Something like if (cancel_work(enable)) return ?

>                         }
>                 }
>         }
> @@ -739,9 +753,27 @@ static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
>         return 0;
>  }
>  
> +static void __enable_busy_stats(struct work_struct *work)
> +{
> +       struct intel_engine_cs *engine =
> +               container_of(work, typeof(*engine), pmu.enable_busy_stats);
> +
> +       WARN_ON_ONCE(intel_enable_engine_stats(engine));
> +}
> +
> +static void __disable_busy_stats(struct work_struct *work)
> +{
> +       struct intel_engine_cs *engine =
> +              container_of(work, typeof(*engine), pmu.disable_busy_stats.work);
> +
> +       intel_disable_engine_stats(engine);
> +}
> +
>  void i915_pmu_register(struct drm_i915_private *i915)
>  {
>         int ret;
> +       struct intel_engine_cs *engine;
> +       enum intel_engine_id id;

What a nice Christmas tree you are growing :-p
-Chris

^ permalink raw reply	[flat|nested] 30+ messages in thread
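
The race raised above, a stalled enable work running after a later disable, can be sketched with two pending flags. This is a hypothetical model of the suggested `if (cancel_work(enable)) return` guard, not the real workqueue code; the flags stand in for pending work items and clearing a flag models a successful cancel_work()/cancel_delayed_work():

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: "queued" flags stand in for pending work items. */
static bool enable_queued;   /* stats-enable work pending */
static bool disable_queued;  /* delayed stats-disable pending */

static void event_enable(void)
{
	/* Cancelling a pending disable means stats are still on,
	 * so there is nothing further to queue. */
	if (disable_queued)
		disable_queued = false;
	else
		enable_queued = true;
}

static void event_disable(void)
{
	/* The suggested guard: if the enable work never ran, the
	 * stats were never turned on, so nothing needs disabling
	 * and no stale enable can fire later. */
	if (enable_queued) {
		enable_queued = false;
		return;
	}
	disable_queued = true;
}
```

With the guard, a rapid enable/disable pair (as perf stat produces at startup) collapses to no pending work at all, so a stalled enable can never execute after the disable; an enable/disable/enable sequence leaves exactly one enable pending.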

* Re: [PATCH 5/8] drm/i915: Wrap context schedule notification
  2017-09-25 17:40   ` Chris Wilson
@ 2017-09-25 18:40     ` Chris Wilson
  0 siblings, 0 replies; 30+ messages in thread
From: Chris Wilson @ 2017-09-25 18:40 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx

Quoting Chris Wilson (2017-09-25 18:40:20)
> Quoting Tvrtko Ursulin (2017-09-25 16:15:40)
> > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > 
> > No functional change just something which will be handy in the
> > following patch.
> > 
> > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

This would be handy right away, if you don't mind pushing it out of
sequence.
-Chris

^ permalink raw reply	[flat|nested] 30+ messages in thread

* ✗ Fi.CI.IGT: failure for i915 PMU and engine busy stats (rev12)
  2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
                   ` (8 preceding siblings ...)
  2017-09-25 15:46 ` ✓ Fi.CI.BAT: success for i915 PMU and engine busy stats (rev12) Patchwork
@ 2017-09-25 21:22 ` Patchwork
  9 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2017-09-25 21:22 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: i915 PMU and engine busy stats (rev12)
URL   : https://patchwork.freedesktop.org/series/27488/
State : failure

== Summary ==

Test perf:
        Subgroup polling:
                pass       -> FAIL       (shard-hsw) fdo#102252
Test kms_cursor_legacy:
        Subgroup flip-vs-cursor-toggle:
                pass       -> FAIL       (shard-hsw) fdo#102670
        Subgroup cursor-vs-flip-toggle:
                pass       -> FAIL       (shard-hsw)
Test gem_sync:
        Subgroup basic-store-all:
                pass       -> FAIL       (shard-hsw)

fdo#102252 https://bugs.freedesktop.org/show_bug.cgi?id=102252
fdo#102670 https://bugs.freedesktop.org/show_bug.cgi?id=102670

shard-hsw        total:2429 pass:1321 dwarn:5   dfail:0   fail:20  skip:1083 time:9825s

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_5807/shards.html

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries
  2017-09-25 17:37   ` Chris Wilson
@ 2017-09-26 12:28     ` Tvrtko Ursulin
  2017-09-26 13:00       ` Chris Wilson
  2017-09-26 13:14       ` Chris Wilson
  0 siblings, 2 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-26 12:28 UTC (permalink / raw)
  To: Chris Wilson, Tvrtko Ursulin, Intel-gfx; +Cc: Peter Zijlstra


On 25/09/2017 18:37, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2017-09-25 16:15:38)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> From: Chris Wilson <chris@chris-wilson.co.uk>
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> From: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
>>
>> The first goal is to be able to measure GPU (and individual ring) busyness
>> without having to poll registers from userspace. (Which not only incurs
>> holding the forcewake lock indefinitely, perturbing the system, but also
>> runs the risk of hanging the machine.) As an alternative we can use the
>> perf event counter interface to sample the ring registers periodically
>> and send those results to userspace.
>>
>> Functionality we are exporting to userspace is via the existing perf PMU
>> API and can be exercised via the existing tools. For example:
>>
>>    perf stat -a -e i915/rcs0-busy/ -I 1000
>>
>> Will print the render engine busyness once per second. All the performance
>> counters can be enumerated (perf list) and have their unit of measure
>> correctly reported in sysfs.
>>
>> v1-v2 (Chris Wilson):
>>
>> v2: Use a common timer for the ring sampling.
>>
>> v3: (Tvrtko Ursulin)
>>   * Decouple uAPI from i915 engine ids.
>>   * Complete uAPI defines.
>>   * Refactor some code to helpers for clarity.
>>   * Skip sampling disabled engines.
>>   * Expose counters in sysfs.
>>   * Pass in fake regs to avoid null ptr deref in perf core.
>>   * Convert to class/instance uAPI.
>>   * Use shared driver code for rc6 residency, power and frequency.
>>
>> v4: (Dmitry Rogozhkin)
>>   * Register PMU with .task_ctx_nr=perf_invalid_context
>>   * Expose cpumask for the PMU with the single CPU in the mask
>>   * Properly support pmu->stop(): it should call pmu->read()
>>   * Properly support pmu->del(): it should call stop(event, PERF_EF_UPDATE)
>>   * Introduce refcounting of event subscriptions.
>>   * Make pmu.busy_stats a refcounter to avoid busy stats going away
>>     with some deleted event.
>>   * Expose cpumask for i915 PMU to avoid multiple events creation of
>>     the same type followed by counter aggregation by perf-stat.
>>   * Track CPUs getting online/offline to migrate perf context. If (likely)
>>     cpumask will initially set CPU0, CONFIG_BOOTPARAM_HOTPLUG_CPU0 will be
>>     needed to see effect of CPU status tracking.
>>   * End result is that only global events are supported and perf stat
>>     works correctly.
>>   * Deny perf driver level sampling - it is prohibited for uncore PMU.
>>
>> v5: (Tvrtko Ursulin)
>>
>>   * Don't hardcode number of engine samplers.
>>   * Rewrite event ref-counting for correctness and simplicity.
>>   * Store initial counter value when starting already enabled events
>>     to correctly report values to all listeners.
>>   * Fix RC6 residency readout.
>>   * Comments, GPL header.
>>
>> v6:
>>   * Add missing entry to v4 changelog.
>>   * Fix accounting in CPU hotplug case by copying the approach from
>>     arch/x86/events/intel/cstate.c. (Dmitry Rogozhkin)
>>
>> v7:
>>   * Log failure message only on failure.
>>   * Remove CPU hotplug notification state on unregister.
>>
>> v8:
>>   * Fix error unwind on failed registration.
>>   * Checkpatch cleanup.
>>
>> v9:
>>   * Drop the energy metric, it is available via intel_rapl_perf.
>>     (Ville Syrjälä)
>>   * Use HAS_RC6(p). (Chris Wilson)
>>   * Handle unsupported non-engine events. (Dmitry Rogozhkin)
>>   * Rebase for intel_rc6_residency_ns needing caller managed
>>     runtime pm.
>>   * Drop HAS_RC6 checks from the read callback since creating those
>>     events will be rejected at init time already.
>>   * Add counter units to sysfs so perf stat output is nicer.
>>   * Cleanup the attribute tables for brevity and readability.
>>
>> v10:
>>   * Fixed queued accounting.
>>
>> v11:
>>   * Move intel_engine_lookup_user to intel_engine_cs.c
>>   * Commit update. (Joonas Lahtinen)
>>
>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> ---
>>   drivers/gpu/drm/i915/Makefile           |   1 +
>>   drivers/gpu/drm/i915/i915_drv.c         |   2 +
>>   drivers/gpu/drm/i915/i915_drv.h         |  78 ++++
>>   drivers/gpu/drm/i915/i915_pmu.c         | 707 ++++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/i915/i915_reg.h         |   3 +
>>   drivers/gpu/drm/i915/intel_engine_cs.c  |  35 ++
>>   drivers/gpu/drm/i915/intel_ringbuffer.h |  25 ++
>>   include/uapi/drm/i915_drm.h             |  57 +++
>>   8 files changed, 908 insertions(+)
>>   create mode 100644 drivers/gpu/drm/i915/i915_pmu.c
>>
>> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
>> index 5182e3d5557d..5c6013961b3b 100644
>> --- a/drivers/gpu/drm/i915/Makefile
>> +++ b/drivers/gpu/drm/i915/Makefile
>> @@ -26,6 +26,7 @@ i915-y := i915_drv.o \
>>   
>>   i915-$(CONFIG_COMPAT)   += i915_ioc32.o
>>   i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o
>> +i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o
>>   
>>   # GEM code
>>   i915-y += i915_cmd_parser.o \
>> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
>> index 7056bb299dc6..03bbe23e4df8 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.c
>> +++ b/drivers/gpu/drm/i915/i915_drv.c
>> @@ -1200,6 +1200,7 @@ static void i915_driver_register(struct drm_i915_private *dev_priv)
>>          struct drm_device *dev = &dev_priv->drm;
>>   
>>          i915_gem_shrinker_init(dev_priv);
>> +       i915_pmu_register(dev_priv);
>>   
>>          /*
>>           * Notify a valid surface after modesetting,
>> @@ -1254,6 +1255,7 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
>>          intel_opregion_unregister(dev_priv);
>>   
>>          i915_perf_unregister(dev_priv);
>> +       i915_pmu_unregister(dev_priv);
>>   
>>          i915_teardown_sysfs(dev_priv);
>>          i915_guc_log_unregister(dev_priv);
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index 6143a709c605..a40c314eed5f 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -40,6 +40,7 @@
>>   #include <linux/hash.h>
>>   #include <linux/intel-iommu.h>
>>   #include <linux/kref.h>
>> +#include <linux/perf_event.h>
>>   #include <linux/pm_qos.h>
>>   #include <linux/reservation.h>
>>   #include <linux/shmem_fs.h>
>> @@ -2198,6 +2199,70 @@ struct intel_cdclk_state {
>>          unsigned int cdclk, vco, ref;
>>   };
>>   
>> +enum {
>> +       __I915_SAMPLE_FREQ_ACT = 0,
>> +       __I915_SAMPLE_FREQ_REQ,
>> +       __I915_NUM_PMU_SAMPLERS
>> +};
>> +
>> +/**
>> + * How many different events we track in the global PMU mask.
>> + *
>> + * It is also used to know the needed number of event reference counters.
>> + */
>> +#define I915_PMU_MASK_BITS \
>> +       ((1 << I915_PMU_SAMPLE_BITS) + \
>> +        (I915_PMU_LAST + 1 - __I915_PMU_OTHER(0)))
>> +
>> +struct i915_pmu {
>> +       /**
>> +        * @node: List node for CPU hotplug handling.
>> +        */
>> +       struct hlist_node node;
>> +       /**
>> +        * @base: PMU base.
>> +        */
>> +       struct pmu base;
>> +       /**
>> +        * @lock: Lock protecting enable mask and ref count handling.
>> +        */
>> +       spinlock_t lock;
>> +       /**
>> +        * @timer: Timer for internal i915 PMU sampling.
>> +        */
>> +       struct hrtimer timer;
>> +       /**
>> +        * @enable: Bitmask of all currently enabled events.
>> +        *
>> +        * Bits are derived from uAPI event numbers in a way that low 16 bits
>> +        * correspond to engine event _sample_ _type_ (I915_SAMPLE_QUEUED is
>> +        * bit 0), and higher bits correspond to other events (for instance
>> +        * I915_PMU_ACTUAL_FREQUENCY is bit 16 etc).
>> +        *
>> +        * In other words, low 16 bits are not per engine but per engine
>> +        * sampler type, while the upper bits are directly mapped to other
>> +        * event types.
>> +        */
>> +       u64 enable;
>> +       /**
>> +        * @enable_count: Reference counts for the enabled events.
>> +        *
>> +        * Array indices are mapped in the same way as bits in the @enable field
>> +        * and they are used to control sampling on/off when multiple clients
>> +        * are using the PMU API.
>> +        */
>> +       unsigned int enable_count[I915_PMU_MASK_BITS];
>> +       /**
>> +        * @sample: Current counter value for i915 events which need sampling.
>> +        *
>> +        * These counters are updated from the i915 PMU sampling timer.
>> +        *
>> +        * Only global counters are held here, while the per-engine ones are in
>> +        * struct intel_engine_cs.
>> +        */
>> +       u64 sample[__I915_NUM_PMU_SAMPLERS];
>> +};
>> +
>>   struct drm_i915_private {
>>          struct drm_device drm;
>>   
>> @@ -2246,6 +2311,8 @@ struct drm_i915_private {
>>          struct pci_dev *bridge_dev;
>>          struct i915_gem_context *kernel_context;
>>          struct intel_engine_cs *engine[I915_NUM_ENGINES];
>> +       struct intel_engine_cs *engine_class[MAX_ENGINE_CLASS + 1]
>> +                                           [MAX_ENGINE_INSTANCE + 1];
>>          struct i915_vma *semaphore;
>>   
>>          struct drm_dma_handle *status_page_dmah;
>> @@ -2708,6 +2775,8 @@ struct drm_i915_private {
>>                  int     irq;
>>          } lpe_audio;
>>   
>> +       struct i915_pmu pmu;
>> +
>>          /*
>>           * NOTE: This is the dri1/ums dungeon, don't add stuff here. Your patch
>>           * will be rejected. Instead look for a better place.
>> @@ -3932,6 +4001,15 @@ extern void i915_perf_fini(struct drm_i915_private *dev_priv);
>>   extern void i915_perf_register(struct drm_i915_private *dev_priv);
>>   extern void i915_perf_unregister(struct drm_i915_private *dev_priv);
>>   
>> +/* i915_pmu.c */
>> +#ifdef CONFIG_PERF_EVENTS
>> +void i915_pmu_register(struct drm_i915_private *i915);
>> +void i915_pmu_unregister(struct drm_i915_private *i915);
>> +#else
>> +static inline void i915_pmu_register(struct drm_i915_private *i915) {}
>> +static inline void i915_pmu_unregister(struct drm_i915_private *i915) {}
>> +#endif
>> +
>>   /* i915_suspend.c */
>>   extern int i915_save_state(struct drm_i915_private *dev_priv);
>>   extern int i915_restore_state(struct drm_i915_private *dev_priv);
>> diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
>> new file mode 100644
>> index 000000000000..cf1406fbc215
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/i915_pmu.c
>> @@ -0,0 +1,707 @@
>> +/*
>> + * Copyright © 2017 Intel Corporation
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice (including the next
>> + * paragraph) shall be included in all copies or substantial portions of the
>> + * Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
>> + * IN THE SOFTWARE.
>> + *
>> + */
>> +
>> +#include <linux/perf_event.h>
>> +#include <linux/pm_runtime.h>
>> +
>> +#include "i915_drv.h"
>> +#include "intel_ringbuffer.h"
>> +
>> +/* Frequency for the sampling timer for events which need it. */
>> +#define FREQUENCY 200
>> +#define PERIOD max_t(u64, 10000, NSEC_PER_SEC / FREQUENCY)
>> +
>> +#define ENGINE_SAMPLE_MASK \
>> +       (BIT(I915_SAMPLE_QUEUED) | \
>> +        BIT(I915_SAMPLE_BUSY) | \
>> +        BIT(I915_SAMPLE_WAIT) | \
>> +        BIT(I915_SAMPLE_SEMA))
>> +
>> +#define ENGINE_SAMPLE_BITS (1 << I915_PMU_SAMPLE_BITS)
>> +
>> +static cpumask_t i915_pmu_cpumask = CPU_MASK_NONE;
>> +
>> +static u8 engine_config_sample(u64 config)
>> +{
>> +       return config & I915_PMU_SAMPLE_MASK;
>> +}
>> +
>> +static u8 engine_event_sample(struct perf_event *event)
>> +{
>> +       return engine_config_sample(event->attr.config);
>> +}
>> +
>> +static u8 engine_event_class(struct perf_event *event)
>> +{
>> +       return (event->attr.config >> I915_PMU_CLASS_SHIFT) & 0xff;
>> +}
>> +
>> +static u8 engine_event_instance(struct perf_event *event)
>> +{
>> +       return (event->attr.config >> I915_PMU_SAMPLE_BITS) & 0xff;
>> +}
>> +
>> +static bool is_engine_config(u64 config)
>> +{
>> +       return config < __I915_PMU_OTHER(0);
>> +}
>> +
>> +static unsigned int config_enabled_bit(u64 config)
>> +{
>> +       if (is_engine_config(config))
>> +               return engine_config_sample(config);
>> +       else
>> +               return ENGINE_SAMPLE_BITS + (config - __I915_PMU_OTHER(0));
>> +}
>> +
>> +static u64 config_enabled_mask(u64 config)
>> +{
>> +       return BIT_ULL(config_enabled_bit(config));
>> +}
>> +
>> +static bool is_engine_event(struct perf_event *event)
>> +{
>> +       return is_engine_config(event->attr.config);
>> +}
>> +
>> +static unsigned int event_enabled_bit(struct perf_event *event)
>> +{
>> +       return config_enabled_bit(event->attr.config);
>> +}
>> +
>> +static bool grab_forcewake(struct drm_i915_private *i915, bool fw)
>> +{
>> +       if (!fw)
>> +               intel_uncore_forcewake_get(i915, FORCEWAKE_ALL);
>> +
>> +       return true;
>> +}
>> +
>> +static void engines_sample(struct drm_i915_private *dev_priv)
>> +{
>> +       struct intel_engine_cs *engine;
>> +       enum intel_engine_id id;
>> +       bool fw = false;
>> +
>> +       if ((dev_priv->pmu.enable & ENGINE_SAMPLE_MASK) == 0)
>> +               return;
>> +
>> +       if (!dev_priv->gt.awake)
>> +               return;
>> +
>> +       if (!intel_runtime_pm_get_if_in_use(dev_priv))
>> +               return;
>> +
>> +       for_each_engine(engine, dev_priv, id) {
>> +               u32 current_seqno = intel_engine_get_seqno(engine);
>> +               u32 last_seqno = intel_engine_last_submit(engine);
>> +               u32 enable = engine->pmu.enable;
>> +
>> +               if (i915_seqno_passed(current_seqno, last_seqno))
>> +                       continue;
>> +
>> +               if ((enable & BIT(I915_SAMPLE_QUEUED)) &&
>> +                   (last_seqno - 1 > current_seqno))
> 
> Hmm, this isn't queued. This is more than one rq submitted to hw.

Yes, that's what I thought we want from the PMU perspective. Maybe 
rename it to 'runnable' if 'queued' is not the clearest name?

> 
> Do you want total inflight (including waiting-for-fences)? That would be
> engine->timeline->inflight_seqnos (additional problem in its laziness).
> 
> Hard to distinguish all other cases, but last - current is susceptible
> to not treating all ready requests as being queued. 

Hm how?

>And miscounts on
> legacy across semaphores (which you've excluded for execlists).
> 
> Aiui you want
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 4eb1a76731b2..19b8b31afbbc 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -477,6 +477,7 @@ void __i915_gem_request_submit(struct drm_i915_gem_request *request)
>          engine->emit_breadcrumb(request,
>                                  request->ring->vaddr + request->postfix);
>   
> +       atomic_dec(&request->engine->queued);
>          spin_lock(&request->timeline->lock);
>          list_move_tail(&request->link, &timeline->requests);
>          spin_unlock(&request->timeline->lock);
> @@ -525,6 +526,7 @@ void __i915_gem_request_unsubmit(struct drm_i915_gem_request *request)
>          spin_lock(&timeline->lock);
>          list_move(&request->link, &timeline->requests);
>          spin_unlock(&timeline->lock);
> +       atomic_inc(&request->engine->queued);
>   
>          /* We don't need to wake_up any waiters on request->execute, they
>           * will get woken by any other event or us re-adding this request
> @@ -556,6 +558,7 @@ submit_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
>          switch (state) {
>          case FENCE_COMPLETE:
>                  trace_i915_gem_request_submit(request);
> +               atomic_inc(&request->engine->queued);
>                  request->engine->submit_request(request);

Or keep the counter as a normal unsigned int under the engine timeline 
lock, moving it into execlists_submit_request etc.?

>                  break;
>   
> 
> That excludes the requests actually in the hw queue, just those in the sw
> queue. Or both.

This looks good to me, I was just reluctant to suggest new software 
tracking. Happy to go with this and define the metric to mean exactly that?


>> +                       engine->pmu.sample[I915_SAMPLE_QUEUED] += PERIOD;
>> +
>> +               if (enable & BIT(I915_SAMPLE_BUSY)) {
>> +                       u32 val;
>> +
>> +                       fw = grab_forcewake(dev_priv, fw);
>> +                       val = I915_READ_FW(RING_MI_MODE(engine->mmio_base));
>> +                       if (!(val & MODE_IDLE))
>> +                               engine->pmu.sample[I915_SAMPLE_BUSY] += PERIOD;
>> +               }
>> +
>> +               if (enable & (BIT(I915_SAMPLE_WAIT) | BIT(I915_SAMPLE_SEMA))) {
>> +                       u32 val;
>> +
>> +                       fw = grab_forcewake(dev_priv, fw);
>> +                       val = I915_READ_FW(RING_CTL(engine->mmio_base));
>> +                       if ((enable & BIT(I915_SAMPLE_WAIT)) &&
>> +                           (val & RING_WAIT))
>> +                               engine->pmu.sample[I915_SAMPLE_WAIT] += PERIOD;
>> +                       if ((enable & BIT(I915_SAMPLE_SEMA)) &&
>> +                           (val & RING_WAIT_SEMAPHORE))
>> +                               engine->pmu.sample[I915_SAMPLE_SEMA] += PERIOD;
>> +               }
>> +       }
>> +
>> +       if (fw)
>> +               intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
>> +       intel_runtime_pm_put(dev_priv);
>> +}
>> +
>> +static void frequency_sample(struct drm_i915_private *dev_priv)
>> +{
>> +       if (dev_priv->pmu.enable &
>> +           config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
>> +               u64 val;
>> +
>> +               val = dev_priv->rps.cur_freq;
>> +               if (dev_priv->gt.awake &&
>> +                   intel_runtime_pm_get_if_in_use(dev_priv)) {
>> +                       val = intel_get_cagf(dev_priv,
>> +                                            I915_READ_NOTRACE(GEN6_RPSTAT1));
>> +                       intel_runtime_pm_put(dev_priv);
>> +               }
>> +               val = intel_gpu_freq(dev_priv, val);
>> +               dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT] += val * PERIOD;
>> +       }
>> +
>> +       if (dev_priv->pmu.enable &
>> +           config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
>> +               u64 val = intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq);
>> +
>> +               dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ] += val * PERIOD;
>> +       }
> 
> Ok. Do we need a comment to explain the importance of averaging here?
> Should we be doing trapezium integration?

After reading up on it, do you mean increasing the accuracy by something 
like sample[] += val * PERIOD + ((val * PERIOD - sample[]) * PERIOD / 2) ?

I am not sure if it makes sense here since our freq changes are much 
slower and less granular than the sampling period.

But it sounds like a good idea for other samplers, no? They are 
potentially much more granular than the sampling period.

> 
>> +}
>> +
>> +static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
>> +{
>> +       struct drm_i915_private *i915 =
>> +               container_of(hrtimer, struct drm_i915_private, pmu.timer);
>> +
>> +       if (i915->pmu.enable == 0)
>> +               return HRTIMER_NORESTART;
>> +
>> +       engines_sample(i915);
>> +       frequency_sample(i915);
>> +
>> +       hrtimer_forward_now(hrtimer, ns_to_ktime(PERIOD));
>> +       return HRTIMER_RESTART;
>> +}
>> +
>> +static u64 count_interrupts(struct drm_i915_private *i915)
>> +{
>> +       /* open-coded kstat_irqs() */
>> +       struct irq_desc *desc = irq_to_desc(i915->drm.pdev->irq);
>> +       u64 sum = 0;
>> +       int cpu;
>> +
>> +       if (!desc || !desc->kstat_irqs)
>> +               return 0;
>> +
>> +       for_each_possible_cpu(cpu)
>> +               sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
>> +
>> +       return sum;
>> +}
>> +
>> +static void i915_pmu_event_destroy(struct perf_event *event)
>> +{
>> +       WARN_ON(event->parent);
>> +}
>> +
>> +static int engine_event_init(struct perf_event *event)
>> +{
>> +       struct drm_i915_private *i915 =
>> +               container_of(event->pmu, typeof(*i915), pmu.base);
>> +
>> +       if (!intel_engine_lookup_user(i915, engine_event_class(event),
>> +                                     engine_event_instance(event)))
>> +               return -ENODEV;
>> +
>> +       switch (engine_event_sample(event)) {
>> +       case I915_SAMPLE_QUEUED:
>> +       case I915_SAMPLE_BUSY:
>> +       case I915_SAMPLE_WAIT:
>> +               break;
>> +       case I915_SAMPLE_SEMA:
>> +               if (INTEL_GEN(i915) < 6)
>> +                       return -ENODEV;
>> +               break;
>> +       default:
>> +               return -ENOENT;
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +static int i915_pmu_event_init(struct perf_event *event)
>> +{
>> +       struct drm_i915_private *i915 =
>> +               container_of(event->pmu, typeof(*i915), pmu.base);
>> +       int cpu, ret;
>> +
>> +       if (event->attr.type != event->pmu->type)
>> +               return -ENOENT;
>> +
>> +       /* unsupported modes and filters */
>> +       if (event->attr.sample_period) /* no sampling */
>> +               return -EINVAL;
>> +
>> +       if (has_branch_stack(event))
>> +               return -EOPNOTSUPP;
>> +
>> +       if (event->cpu < 0)
>> +               return -EINVAL;
>> +
>> +       cpu = cpumask_any_and(&i915_pmu_cpumask,
>> +                             topology_sibling_cpumask(event->cpu));
>> +       if (cpu >= nr_cpu_ids)
>> +               return -ENODEV;
>> +
>> +       if (is_engine_event(event)) {
>> +               ret = engine_event_init(event);
>> +       } else {
>> +               ret = 0;
>> +               switch (event->attr.config) {
>> +               case I915_PMU_INTERRUPTS:
>> +                       break;
>> +               case I915_PMU_ACTUAL_FREQUENCY:
>> +                       if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))
>> +                                /* Requires a mutex for sampling! */
>> +                               ret = -ENODEV;
>> +               case I915_PMU_REQUESTED_FREQUENCY:
>> +                       if (INTEL_GEN(i915) < 6)
>> +                               ret = -ENODEV;
>> +                       break;
>> +               case I915_PMU_RC6_RESIDENCY:
>> +                       if (!HAS_RC6(i915))
>> +                               ret = -ENODEV;
>> +                       break;
>> +               case I915_PMU_RC6p_RESIDENCY:
>> +               case I915_PMU_RC6pp_RESIDENCY:
>> +                       if (!HAS_RC6p(i915))
>> +                               ret = -ENODEV;
> 
> I must introduce you to my HAS_RC6pp() patch.

Can I have a reminder, please? Do you have a patch which adds it? I 
thought from the code that wherever we have RC6p we also have RC6pp, is 
that wrong?

> 
>> +                       break;
>> +               default:
>> +                       ret = -ENOENT;
>> +                       break;
>> +               }
>> +       }
>> +       if (ret)
>> +               return ret;
>> +
>> +       event->cpu = cpu;
>> +       if (!event->parent)
>> +               event->destroy = i915_pmu_event_destroy;
>> +
>> +       return 0;
>> +}
>> +
>> +static u64 __i915_pmu_event_read(struct perf_event *event)
>> +{
>> +       struct drm_i915_private *i915 =
>> +               container_of(event->pmu, typeof(*i915), pmu.base);
>> +       u64 val = 0;
>> +
>> +       if (is_engine_event(event)) {
>> +               u8 sample = engine_event_sample(event);
>> +               struct intel_engine_cs *engine;
>> +
>> +               engine = intel_engine_lookup_user(i915,
>> +                                                 engine_event_class(event),
>> +                                                 engine_event_instance(event));
>> +
>> +               if (WARN_ON_ONCE(!engine)) {
>> +                       /* Do nothing */
>> +               } else {
>> +                       val = engine->pmu.sample[sample];
>> +               }
>> +       } else {
>> +               switch (event->attr.config) {
>> +               case I915_PMU_ACTUAL_FREQUENCY:
>> +                       val = i915->pmu.sample[__I915_SAMPLE_FREQ_ACT];
>> +                       break;
>> +               case I915_PMU_REQUESTED_FREQUENCY:
>> +                       val = i915->pmu.sample[__I915_SAMPLE_FREQ_REQ];
>> +                       break;
>> +               case I915_PMU_INTERRUPTS:
>> +                       val = count_interrupts(i915);
>> +                       break;
>> +               case I915_PMU_RC6_RESIDENCY:
>> +                       intel_runtime_pm_get(i915);
>> +                       val = intel_rc6_residency_ns(i915,
>> +                                                    IS_VALLEYVIEW(i915) ?
>> +                                                    VLV_GT_RENDER_RC6 :
>> +                                                    GEN6_GT_GFX_RC6);
>> +                       intel_runtime_pm_put(i915);
>> +                       break;
>> +               case I915_PMU_RC6p_RESIDENCY:
>> +                       intel_runtime_pm_get(i915);
>> +                       val = intel_rc6_residency_ns(i915, GEN6_GT_GFX_RC6p);
>> +                       intel_runtime_pm_put(i915);
>> +                       break;
>> +               case I915_PMU_RC6pp_RESIDENCY:
>> +                       intel_runtime_pm_get(i915);
>> +                       val = intel_rc6_residency_ns(i915, GEN6_GT_GFX_RC6pp);
>> +                       intel_runtime_pm_put(i915);
>> +                       break;
>> +               }
>> +       }
> 
> Ok.
> 
>> +
>> +       return val;
>> +}
>> +
>> +static void i915_pmu_event_read(struct perf_event *event)
>> +{
>> +       struct hw_perf_event *hwc = &event->hw;
>> +       u64 prev, new;
>> +
>> +again:
>> +       prev = local64_read(&hwc->prev_count);
>> +       new = __i915_pmu_event_read(event);
>> +
>> +       if (local64_cmpxchg(&hwc->prev_count, prev, new) != prev)
>> +               goto again;
>> +
>> +       local64_add(new - prev, &event->count);
>> +}
>> +
>> +static void i915_pmu_enable(struct perf_event *event)
>> +{
>> +       struct drm_i915_private *i915 =
>> +               container_of(event->pmu, typeof(*i915), pmu.base);
>> +       unsigned int bit = event_enabled_bit(event);
>> +       unsigned long flags;
>> +
>> +       spin_lock_irqsave(&i915->pmu.lock, flags);
>> +
>> +       /*
>> +        * Start the sampling timer when enabling the first event.
>> +        */
>> +       if (i915->pmu.enable == 0)
>> +               hrtimer_start_range_ns(&i915->pmu.timer,
>> +                                      ns_to_ktime(PERIOD), 0,
>> +                                      HRTIMER_MODE_REL_PINNED);
>> +
>> +       /*
>> +        * Update the bitmask of enabled events and increment
>> +        * the event reference counter.
>> +        */
>> +       GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
>> +       GEM_BUG_ON(i915->pmu.enable_count[bit] == ~0);
>> +       i915->pmu.enable |= BIT_ULL(bit);
>> +       i915->pmu.enable_count[bit]++;
>> +
>> +       /*
>> +        * For per-engine events the bitmask and reference counting
>> +        * is stored per engine.
>> +        */
>> +       if (is_engine_event(event)) {
>> +               u8 sample = engine_event_sample(event);
>> +               struct intel_engine_cs *engine;
>> +
>> +               engine = intel_engine_lookup_user(i915,
>> +                                                 engine_event_class(event),
>> +                                                 engine_event_instance(event));
>> +               GEM_BUG_ON(!engine);
>> +               engine->pmu.enable |= BIT(sample);
>> +
>> +               GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
>> +               GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
>> +               engine->pmu.enable_count[sample]++;
>> +       }
>> +
>> +       /*
>> +        * Store the current counter value so we can report the correct delta
>> +        * for all listeners. Even when the event was already enabled and has
>> +        * an existing non-zero value.
>> +        */
>> +       local64_set(&event->hw.prev_count, __i915_pmu_event_read(event));
>> +
>> +       spin_unlock_irqrestore(&i915->pmu.lock, flags);
>> +}
>> +
>> +static void i915_pmu_disable(struct perf_event *event)
>> +{
>> +       struct drm_i915_private *i915 =
>> +               container_of(event->pmu, typeof(*i915), pmu.base);
>> +       unsigned int bit = event_enabled_bit(event);
>> +       unsigned long flags;
>> +
>> +       spin_lock_irqsave(&i915->pmu.lock, flags);
>> +
>> +       if (is_engine_event(event)) {
>> +               u8 sample = engine_event_sample(event);
>> +               struct intel_engine_cs *engine;
>> +
>> +               engine = intel_engine_lookup_user(i915,
>> +                                                 engine_event_class(event),
>> +                                                 engine_event_instance(event));
>> +               GEM_BUG_ON(!engine);
>> +               GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
>> +               GEM_BUG_ON(engine->pmu.enable_count[sample] == 0);
>> +               /*
>> +                * Decrement the reference count and clear the enabled
>> +                * bitmask when the last listener on an event goes away.
>> +                */
>> +               if (--engine->pmu.enable_count[sample] == 0)
>> +                       engine->pmu.enable &= ~BIT(sample);
>> +       }
>> +
>> +       GEM_BUG_ON(bit >= I915_PMU_MASK_BITS);
>> +       GEM_BUG_ON(i915->pmu.enable_count[bit] == 0);
>> +       /*
>> +        * Decrement the reference count and clear the enabled
>> +        * bitmask when the last listener on an event goes away.
>> +        */
>> +       if (--i915->pmu.enable_count[bit] == 0)
>> +               i915->pmu.enable &= ~BIT_ULL(bit);
>> +
>> +       spin_unlock_irqrestore(&i915->pmu.lock, flags);
> 
> Ok.
>> +}
>> +
>> +static void i915_pmu_event_start(struct perf_event *event, int flags)
>> +{
>> +       i915_pmu_enable(event);
>> +       event->hw.state = 0;
>> +}
>> +
>> +static void i915_pmu_event_stop(struct perf_event *event, int flags)
>> +{
>> +       if (flags & PERF_EF_UPDATE)
>> +               i915_pmu_event_read(event);
>> +       i915_pmu_disable(event);
>> +       event->hw.state = PERF_HES_STOPPED;
> 
> Never did understand the differences in perf start/stop, enable/disable,
> add/del :)
> 
>> +}
>> +
>> +static int i915_pmu_event_add(struct perf_event *event, int flags)
>> +{
>> +       if (flags & PERF_EF_START)
>> +               i915_pmu_event_start(event, flags);
>> +
>> +       return 0;
>> +}
>> +
>> +static void i915_pmu_event_del(struct perf_event *event, int flags)
>> +{
>> +       i915_pmu_event_stop(event, PERF_EF_UPDATE);
>> +}
>> +
>> +static int i915_pmu_event_event_idx(struct perf_event *event)
>> +{
>> +       return 0;
>> +}
>> +
>> +static ssize_t i915_pmu_format_show(struct device *dev,
>> +                                   struct device_attribute *attr, char *buf)
>> +{
>> +       struct dev_ext_attribute *eattr;
>> +
>> +       eattr = container_of(attr, struct dev_ext_attribute, attr);
>> +       return sprintf(buf, "%s\n", (char *)eattr->var);
>> +}
>> +
>> +#define I915_PMU_FORMAT_ATTR(_name, _config) \
>> +       (&((struct dev_ext_attribute[]) { \
>> +               { .attr = __ATTR(_name, 0444, i915_pmu_format_show, NULL), \
>> +                 .var = (void *)_config, } \
>> +       })[0].attr.attr)
>> +
>> +static struct attribute *i915_pmu_format_attrs[] = {
>> +       I915_PMU_FORMAT_ATTR(i915_eventid, "config:0-42"),
>> +       NULL,
>> +};
>> +
>> +static const struct attribute_group i915_pmu_format_attr_group = {
>> +       .name = "format",
>> +       .attrs = i915_pmu_format_attrs,
>> +};
>> +
>> +static ssize_t i915_pmu_event_show(struct device *dev,
>> +                                  struct device_attribute *attr, char *buf)
>> +{
>> +       struct dev_ext_attribute *eattr;
>> +
>> +       eattr = container_of(attr, struct dev_ext_attribute, attr);
>> +       return sprintf(buf, "config=0x%lx\n", (unsigned long)eattr->var);
>> +}
>> +
>> +#define I915_EVENT_ATTR(_name, _config) \
>> +       (&((struct dev_ext_attribute[]) { \
>> +               { .attr = __ATTR(_name, 0444, i915_pmu_event_show, NULL), \
>> +                 .var = (void *)_config, } \
>> +       })[0].attr.attr)
>> +
>> +#define I915_EVENT_STR(_name, _str) \
>> +       (&((struct perf_pmu_events_attr[]) { \
>> +               { .attr      = __ATTR(_name, 0444, perf_event_sysfs_show, NULL), \
>> +                 .id        = 0, \
>> +                 .event_str = _str, } \
>> +       })[0].attr.attr)
>> +
>> +#define I915_EVENT(_name, _config, _unit) \
>> +       I915_EVENT_ATTR(_name, _config), \
>> +       I915_EVENT_STR(_name.unit, _unit)
>> +
>> +#define I915_ENGINE_EVENT(_name, _class, _instance, _sample) \
>> +       I915_EVENT_ATTR(_name, __I915_PMU_ENGINE(_class, _instance, _sample)), \
>> +       I915_EVENT_STR(_name.unit, "ns")
>> +
>> +#define I915_ENGINE_EVENTS(_name, _class, _instance) \
>> +       I915_ENGINE_EVENT(_name##_instance-queued, _class, _instance, I915_SAMPLE_QUEUED), \
>> +       I915_ENGINE_EVENT(_name##_instance-busy, _class, _instance, I915_SAMPLE_BUSY), \
>> +       I915_ENGINE_EVENT(_name##_instance-sema, _class, _instance, I915_SAMPLE_SEMA), \
>> +       I915_ENGINE_EVENT(_name##_instance-wait, _class, _instance, I915_SAMPLE_WAIT)
>> +
>> +static struct attribute *i915_pmu_events_attrs[] = {
>> +       I915_ENGINE_EVENTS(rcs, I915_ENGINE_CLASS_RENDER, 0),
>> +       I915_ENGINE_EVENTS(bcs, I915_ENGINE_CLASS_COPY, 0),
>> +       I915_ENGINE_EVENTS(vcs, I915_ENGINE_CLASS_VIDEO, 0),
>> +       I915_ENGINE_EVENTS(vcs, I915_ENGINE_CLASS_VIDEO, 1),
>> +       I915_ENGINE_EVENTS(vecs, I915_ENGINE_CLASS_VIDEO_ENHANCE, 0),
> 
> Please add a crossref to intel_engine_cs to remind ourselves to add the
> extra perf counters if we add a new engine.

On my wishlist I have an item to initialize this array (more or less) 
from the for_each_engine loop. But a comment will do for now. :)
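A minimal userspace sketch of that wishlist item, generating the "<engine>-<sample>" event names from a loop instead of a static table (the engine list and helper below are made up for illustration, not the driver's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the driver's engine list. */
struct engine { const char *name; };

static const struct engine engines[] = {
	{ "rcs0" }, { "bcs0" }, { "vcs0" }, { "vcs1" }, { "vecs0" },
};

static const char *samples[] = { "queued", "busy", "sema", "wait" };

/* Fill 'out' with "<engine>-<sample>" names, one per engine/sample
 * pair, returning how many were generated. */
static int build_event_names(char out[][32], int out_len)
{
	int n = 0;
	unsigned int e, s;

	for (e = 0; e < sizeof(engines) / sizeof(engines[0]); e++)
		for (s = 0; s < sizeof(samples) / sizeof(samples[0]); s++) {
			if (n == out_len)
				return n;
			snprintf(out[n++], 32, "%s-%s",
				 engines[e].name, samples[s]);
		}
	return n;
}
```

The real version would build the struct attribute entries the same way, one per engine/sample pair, so a new engine automatically grows the sysfs event list.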

> 
>> +
>> +       I915_EVENT(actual-frequency,    I915_PMU_ACTUAL_FREQUENCY,    "Hz"),
>> +       I915_EVENT(requested-frequency, I915_PMU_REQUESTED_FREQUENCY, "Hz"),
>> +
>> +       I915_EVENT_ATTR(interrupts, I915_PMU_INTERRUPTS),
>> +
>> +       I915_EVENT(rc6-residency,   I915_PMU_RC6_RESIDENCY,   "ns"),
>> +       I915_EVENT(rc6p-residency,  I915_PMU_RC6p_RESIDENCY,  "ns"),
>> +       I915_EVENT(rc6pp-residency, I915_PMU_RC6pp_RESIDENCY, "ns"),
>> +
>> +       NULL,
>> +};
>> +
>> +static const struct attribute_group i915_pmu_events_attr_group = {
>> +       .name = "events",
>> +       .attrs = i915_pmu_events_attrs,
>> +};
>> +
>> +static ssize_t
>> +i915_pmu_get_attr_cpumask(struct device *dev,
>> +                         struct device_attribute *attr,
>> +                         char *buf)
>> +{
>> +       return cpumap_print_to_pagebuf(true, buf, &i915_pmu_cpumask);
>> +}
>> +
>> +static DEVICE_ATTR(cpumask, 0444, i915_pmu_get_attr_cpumask, NULL);
>> +
>> +static struct attribute *i915_cpumask_attrs[] = {
>> +       &dev_attr_cpumask.attr,
>> +       NULL,
>> +};
>> +
>> +static struct attribute_group i915_pmu_cpumask_attr_group = {
>> +       .attrs = i915_cpumask_attrs,
>> +};
>> +
>> +static const struct attribute_group *i915_pmu_attr_groups[] = {
>> +       &i915_pmu_format_attr_group,
>> +       &i915_pmu_events_attr_group,
>> +       &i915_pmu_cpumask_attr_group,
>> +       NULL
>> +};
>> +
>> +static int i915_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
>> +{
>> +       unsigned int target;
>> +
>> +       target = cpumask_any_and(&i915_pmu_cpumask, &i915_pmu_cpumask);
>> +       /* Select the first online CPU as a designated reader. */
>> +       if (target >= nr_cpu_ids)
>> +               cpumask_set_cpu(cpu, &i915_pmu_cpumask);
>> +
>> +       return 0;
>> +}
>> +
>> +static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
>> +{
>> +       struct i915_pmu *pmu = hlist_entry_safe(node, typeof(*pmu), node);
>> +       unsigned int target;
>> +
>> +       if (cpumask_test_and_clear_cpu(cpu, &i915_pmu_cpumask)) {
>> +               target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
>> +               /* Migrate events if there is a valid target */
>> +               if (target < nr_cpu_ids) {
>> +                       cpumask_set_cpu(target, &i915_pmu_cpumask);
>> +                       perf_pmu_migrate_context(&pmu->base, cpu, target);
>> +               }
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +void i915_pmu_register(struct drm_i915_private *i915)
>> +{
>> +       int ret;
>> +
>> +       if (INTEL_GEN(i915) <= 2) {
>> +               DRM_INFO("PMU not supported for this GPU.");
>> +               return;
>> +       }
>> +
>> +       ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
>> +                                     "perf/x86/intel/i915:online",
>> +                                     i915_pmu_cpu_online,
>> +                                     i915_pmu_cpu_offline);
>> +       if (ret)
>> +               goto err_hp_state;
>> +
>> +       ret = cpuhp_state_add_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
>> +                                      &i915->pmu.node);
>> +       if (ret)
>> +               goto err_hp_instance;
>> +
>> +       i915->pmu.base.attr_groups      = i915_pmu_attr_groups;
>> +       i915->pmu.base.task_ctx_nr      = perf_invalid_context;
>> +       i915->pmu.base.event_init       = i915_pmu_event_init;
>> +       i915->pmu.base.add              = i915_pmu_event_add;
>> +       i915->pmu.base.del              = i915_pmu_event_del;
>> +       i915->pmu.base.start            = i915_pmu_event_start;
>> +       i915->pmu.base.stop             = i915_pmu_event_stop;
>> +       i915->pmu.base.read             = i915_pmu_event_read;
>> +       i915->pmu.base.event_idx        = i915_pmu_event_event_idx;
>> +
>> +       spin_lock_init(&i915->pmu.lock);
>> +       hrtimer_init(&i915->pmu.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
>> +       i915->pmu.timer.function = i915_sample;
>> +       i915->pmu.enable = 0;
>> +
>> +       ret = perf_pmu_register(&i915->pmu.base, "i915", -1);
>> +       if (ret == 0)
>> +               return;
>> +
>> +       i915->pmu.base.event_init = NULL;
>> +
>> +       WARN_ON(cpuhp_state_remove_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
>> +                                           &i915->pmu.node));
>> +
>> +err_hp_instance:
>> +       cpuhp_remove_multi_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
>> +
>> +err_hp_state:
>> +       DRM_NOTE("Failed to register PMU! (err=%d)\n", ret);
> 
> Fair enough, not fatal, the driver will continue; just some user
> functionality will be lost. No data loss or anything else meriting a
> warning.
> 
>> +}
>> +
>> +void i915_pmu_unregister(struct drm_i915_private *i915)
>> +{
>> +       if (!i915->pmu.base.event_init)
>> +               return;
>> +
>> +       WARN_ON(cpuhp_state_remove_instance(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
>> +                                           &i915->pmu.node));
>> +       cpuhp_remove_multi_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
>> +
>> +       i915->pmu.enable = 0;
>> +
>> +       hrtimer_cancel(&i915->pmu.timer);
>> +
>> +       perf_pmu_unregister(&i915->pmu.base);
>> +       i915->pmu.base.event_init = NULL;
>> +}
>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
>> index 82f36dd0cd94..16e36b2eef87 100644
>> --- a/drivers/gpu/drm/i915/i915_reg.h
>> +++ b/drivers/gpu/drm/i915/i915_reg.h
>> @@ -186,6 +186,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>>   #define VIDEO_ENHANCEMENT_CLASS        2
>>   #define COPY_ENGINE_CLASS      3
>>   #define OTHER_CLASS            4
>> +#define MAX_ENGINE_CLASS       4
>> +
>> +#define MAX_ENGINE_INSTANCE    1
>>   
>>   /* PCI config space */
>>   
>> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
>> index a28e2a864cf1..35c117c3fa0d 100644
>> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
>> @@ -200,6 +200,15 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
>>          GEM_BUG_ON(info->class >= ARRAY_SIZE(intel_engine_classes));
>>          class_info = &intel_engine_classes[info->class];
>>   
>> +       if (GEM_WARN_ON(info->class > MAX_ENGINE_CLASS))
>> +               return -EINVAL;
>> +
>> +       if (GEM_WARN_ON(info->instance > MAX_ENGINE_INSTANCE))
>> +               return -EINVAL;
>> +
>> +       if (GEM_WARN_ON(dev_priv->engine_class[info->class][info->instance]))
>> +               return -EINVAL;
>> +
>>          GEM_BUG_ON(dev_priv->engine[id]);
>>          engine = kzalloc(sizeof(*engine), GFP_KERNEL);
>>          if (!engine)
>> @@ -227,6 +236,7 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
>>   
>>          ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
>>   
>> +       dev_priv->engine_class[info->class][info->instance] = engine;
>>          dev_priv->engine[id] = engine;
>>          return 0;
>>   }
>> @@ -1578,6 +1588,31 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine)
>>          }
>>   }
>>   
>> +static u8 user_class_map[I915_ENGINE_CLASS_MAX] = {
>> +       [I915_ENGINE_CLASS_OTHER] = OTHER_CLASS,
>> +       [I915_ENGINE_CLASS_RENDER] = RENDER_CLASS,
>> +       [I915_ENGINE_CLASS_COPY] = COPY_ENGINE_CLASS,
>> +       [I915_ENGINE_CLASS_VIDEO] = VIDEO_DECODE_CLASS,
>> +       [I915_ENGINE_CLASS_VIDEO_ENHANCE] = VIDEO_ENHANCEMENT_CLASS,
>> +};
>> +
>> +struct intel_engine_cs *
>> +intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
>> +{
>> +       if (class >= ARRAY_SIZE(user_class_map))
>> +               return NULL;
>> +
>> +       class = user_class_map[class];
>> +
>> +       if (WARN_ON_ONCE(class > MAX_ENGINE_CLASS))
>> +               return NULL;
>> +
>> +       if (instance > MAX_ENGINE_INSTANCE)
>> +               return NULL;
>> +
>> +       return i915->engine_class[class][instance];
>> +}
>> +
>>   #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>>   #include "selftests/mock_engine.c"
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
>> index 56d7ae9f298b..7a901e766d03 100644
>> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
>> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
>> @@ -330,6 +330,28 @@ struct intel_engine_cs {
>>                  I915_SELFTEST_DECLARE(bool mock : 1);
>>          } breadcrumbs;
>>   
>> +       struct {
>> +               /**
>> +                * @enable: Bitmask of enable sample events on this engine.
>> +                *
>> +                * Bits correspond to sample event types, for instance
>> +                * I915_SAMPLE_QUEUED is bit 0 etc.
>> +                */
>> +               u32 enable;
>> +               /**
>> +                * @enable_count: Reference count for the enabled samplers.
>> +                *
>> +                * Index number corresponds to the bit number from @enable.
>> +                */
>> +               unsigned int enable_count[I915_PMU_SAMPLE_BITS];
>> +               /**
>> +                * @sample: Counter value for sampling events.
>> +                *
>> +                * Our internal timer stores the current counter in this field.
>> +                */
>> +               u64 sample[I915_ENGINE_SAMPLE_MAX];
>> +       } pmu;
>> +
>>          /*
>>           * A pool of objects to use as shadow copies of client batch buffers
>>           * when the command parser is enabled. Prevents the client from
>> @@ -834,4 +856,7 @@ void intel_engines_reset_default_submission(struct drm_i915_private *i915);
>>   
>>   bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
>>   
>> +struct intel_engine_cs *
>> +intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
>> +
>>   #endif /* _INTEL_RINGBUFFER_H_ */
>> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
>> index fe25a01c81f2..682abafecc8c 100644
>> --- a/include/uapi/drm/i915_drm.h
>> +++ b/include/uapi/drm/i915_drm.h
>> @@ -86,6 +86,63 @@ enum i915_mocs_table_index {
>>          I915_MOCS_CACHED,
>>   };
> 
> Hmm, how about uapi/drm/i915_perf.h? (Or i915_pmu.h)
> We are not adding traditional drm-ioctl functionality (except for some
> common enums). But I guess it is not huge, so no big concern. Just
> thinking that we essentially copy this into igt, so we should really
> make it a copy!

No objections from me. Definitely want it?

> 
>> +enum drm_i915_gem_engine_class {
>> +       I915_ENGINE_CLASS_OTHER = 0,
>> +       I915_ENGINE_CLASS_RENDER = 1,
>> +       I915_ENGINE_CLASS_COPY = 2,
>> +       I915_ENGINE_CLASS_VIDEO = 3,
>> +       I915_ENGINE_CLASS_VIDEO_ENHANCE = 4,
>> +       I915_ENGINE_CLASS_MAX /* non-ABI */
>> +};
>> +
>> +/**
>> + * DOC: perf_events exposed by i915 through /sys/bus/event_sources/drivers/i915
>> + *
>> + */
>> +
>> +enum drm_i915_pmu_engine_sample {
>> +       I915_SAMPLE_QUEUED = 0,
>> +       I915_SAMPLE_BUSY = 1,
>> +       I915_SAMPLE_WAIT = 2,
>> +       I915_SAMPLE_SEMA = 3,
>> +       I915_ENGINE_SAMPLE_MAX /* non-ABI */
>> +};
>> +
>> +#define I915_PMU_SAMPLE_BITS (4)
>> +#define I915_PMU_SAMPLE_MASK (0xf)
>> +#define I915_PMU_SAMPLE_INSTANCE_BITS (8)
>> +#define I915_PMU_CLASS_SHIFT \
>> +       (I915_PMU_SAMPLE_BITS + I915_PMU_SAMPLE_INSTANCE_BITS)
>> +
>> +#define __I915_PMU_ENGINE(class, instance, sample) \
>> +       ((class) << I915_PMU_CLASS_SHIFT | \
>> +       (instance) << I915_PMU_SAMPLE_BITS | \
>> +       (sample))
>> +
>> +#define I915_PMU_ENGINE_QUEUED(class, instance) \
>> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_QUEUED)
>> +
>> +#define I915_PMU_ENGINE_BUSY(class, instance) \
>> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_BUSY)
>> +
>> +#define I915_PMU_ENGINE_WAIT(class, instance) \
>> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_WAIT)
>> +
>> +#define I915_PMU_ENGINE_SEMA(class, instance) \
>> +       __I915_PMU_ENGINE(class, instance, I915_SAMPLE_SEMA)
>> +
>> +#define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x))
>> +
>> +#define I915_PMU_ACTUAL_FREQUENCY      __I915_PMU_OTHER(0)
>> +#define I915_PMU_REQUESTED_FREQUENCY   __I915_PMU_OTHER(1)
>> +#define I915_PMU_INTERRUPTS            __I915_PMU_OTHER(2)
>> +
>> +#define I915_PMU_RC6_RESIDENCY         __I915_PMU_OTHER(3)
>> +#define I915_PMU_RC6p_RESIDENCY                __I915_PMU_OTHER(4)
>> +#define I915_PMU_RC6pp_RESIDENCY       __I915_PMU_OTHER(5)
>> +

Would we want to add some more counters?

I915_PMU_ENGINE_PREEMPT - number of preemption events on engine
I915_PMU_ENGINE_RESCHEDULE - number of requests which needed to be moved 
in the tree due to a new request arriving

Or leave all this for later, since you also had some ideas on what to add.
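For reference, the config bit layout defined by the __I915_PMU_ENGINE macros quoted above (sample in bits 0-3, instance in bits 4-11, class from bit 12 up) can be sanity-checked with a small userspace sketch; the names below are local stand-ins for the uapi macros:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors I915_PMU_SAMPLE_BITS and friends from the uapi header. */
#define SAMPLE_BITS	4
#define SAMPLE_MASK	0xf
#define INSTANCE_BITS	8
#define CLASS_SHIFT	(SAMPLE_BITS + INSTANCE_BITS)

/* __I915_PMU_ENGINE(class, instance, sample) */
static uint64_t engine_config(uint64_t class, uint64_t instance,
			      uint64_t sample)
{
	return (class << CLASS_SHIFT) | (instance << SAMPLE_BITS) | sample;
}

/* Decoding helpers, as the PMU side would unpack event->attr.config. */
static uint64_t config_sample(uint64_t config)
{
	return config & SAMPLE_MASK;
}

static uint64_t config_instance(uint64_t config)
{
	return (config >> SAMPLE_BITS) & ((1 << INSTANCE_BITS) - 1);
}

static uint64_t config_class(uint64_t config)
{
	return config >> CLASS_SHIFT;
}
```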

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 6/8] drm/i915: Engine busy time tracking
  2017-09-25 17:43   ` Chris Wilson
@ 2017-09-26 12:30     ` Tvrtko Ursulin
  0 siblings, 0 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-26 12:30 UTC (permalink / raw)
  To: Chris Wilson, Tvrtko Ursulin, Intel-gfx


On 25/09/2017 18:43, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2017-09-25 16:15:41)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Track total time requests have been executing on the hardware.
>>
>> We add new kernel API to allow software tracking of time GPU
>> engines are spending executing requests.
>>
>> Both per-engine and global API is added with the latter also
>> being exported for use by external users.
>>
>> v2:
>>   * Squashed with the internal API.
>>   * Dropped static key.
>>   * Made per-engine.
>>   * Store time in monotonic ktime.
>>
>> v3: Moved stats clearing to disable.
>>
>> v4:
>>   * Comments.
>>   * Don't export the API just yet.
>>
>> v5: Whitespace cleanup.
>>
>> v6:
>>   * Rename ref to active.
>>   * Drop engine aggregate stats for now.
>>   * Account initial busy period after enabling stats.
>>
>> v7:
>>   * Rebase.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> ---
>>   drivers/gpu/drm/i915/intel_engine_cs.c  | 84 ++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/i915/intel_lrc.c        |  2 +
>>   drivers/gpu/drm/i915/intel_ringbuffer.h | 92 +++++++++++++++++++++++++++++++++
>>   3 files changed, 178 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
>> index 35c117c3fa0d..8db83f504d70 100644
>> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
>> @@ -234,6 +234,8 @@ intel_engine_setup(struct drm_i915_private *dev_priv,
>>          /* Nothing to do here, execute in order of dependencies */
>>          engine->schedule = NULL;
>>   
>> +       spin_lock_init(&engine->stats.lock);
>> +
>>          ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
>>   
>>          dev_priv->engine_class[info->class][info->instance] = engine;
>> @@ -1613,6 +1615,88 @@ intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
>>          return i915->engine_class[class][instance];
>>   }
>>   
>> +/**
>> + * intel_enable_engine_stats() - Enable engine busy tracking on engine
>> + * @engine: engine to enable stats collection
>> + *
>> + * Start collecting the engine busyness data for @engine.
>> + *
>> + * Returns 0 on success or a negative error code.
>> + */
>> +int intel_enable_engine_stats(struct intel_engine_cs *engine)
>> +{
>> +       unsigned long flags;
>> +
>> +       if (!i915_modparams.enable_execlists)
>> +               return -ENODEV;
>> +
>> +       spin_lock_irqsave(&engine->stats.lock, flags);
>> +       if (engine->stats.enabled == ~0)
>> +               goto busy;
> 
> This still makes me go wut? Why not just refcount_t for this?

I need the spinlock anyway (in context in/out), so refcount_t for 
enable doesn't sound like a gain to me.

>> +       if (engine->stats.enabled++ == 0)
>> +               engine->stats.enabled_at = ktime_get();
>> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
>> +
>> +       return 0;
>> +
>> +busy:
>> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
>> +
>> +       return -EBUSY;
>> +}
>> +
>> +/**
>> + * intel_disable_engine_stats() - Disable engine busy tracking on engine
>> + * @engine: engine to disable stats collection
>> + *
>> + * Stops collecting the engine busyness data for @engine.
>> + */
>> +void intel_disable_engine_stats(struct intel_engine_cs *engine)
>> +{
>> +       unsigned long flags;
>> +
>> +       if (!i915_modparams.enable_execlists)
>> +               return;
>> +
>> +       spin_lock_irqsave(&engine->stats.lock, flags);
>> +       WARN_ON_ONCE(engine->stats.enabled == 0);
>> +       if (--engine->stats.enabled == 0) {
>> +               engine->stats.enabled_at = 0;
>> +               engine->stats.active = 0;
>> +               engine->stats.start = 0;
>> +               engine->stats.total = 0;
>> +       }
>> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
>> +}
>> +
>> +/**
>> + * intel_engine_get_busy_time() - Return current accumulated engine busyness
>> + * @engine: engine to report on
>> + *
>> + * Returns accumulated time @engine was busy since engine stats were enabled.
>> + */
>> +ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine)
>> +{
>> +       ktime_t total;
>> +       unsigned long flags;
>> +
>> +       spin_lock_irqsave(&engine->stats.lock, flags);
>> +
>> +       total = engine->stats.total;
>> +
>> +       /*
>> +        * If the engine is executing something at the moment
>> +        * add it to the total.
>> +        */
>> +       if (engine->stats.active)
>> +               total = ktime_add(total,
>> +                                 ktime_sub(ktime_get(), engine->stats.start));
>> +
>> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
>> +
>> +       return total;
>> +}
>> +
>>   #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>>   #include "selftests/mock_engine.c"
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>> index 7cd14b701ed7..1e743d3d16cb 100644
>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>> @@ -366,12 +366,14 @@ execlists_context_status_change(struct drm_i915_gem_request *rq,
>>   static inline void
>>   execlists_context_schedule_in(struct drm_i915_gem_request *rq)
>>   {
>> +       intel_engine_context_in(rq->engine);
>>          execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
> 
> Symmetry: start accumulating after the notifier.

Ok.

> 
>>   }
>>   
>>   static inline void
>>   execlists_context_schedule_out(struct drm_i915_gem_request *rq)
>>   {
>> +       intel_engine_context_out(rq->engine);
>>          execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
>>   }
>>   
>> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
>> index 7a901e766d03..8db228ebdb28 100644
>> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
>> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
>> @@ -538,6 +538,38 @@ struct intel_engine_cs {
>>           * certain bits to encode the command length in the header).
>>           */
>>          u32 (*get_cmd_length_mask)(u32 cmd_header);
>> +
>> +       struct {
>> +               /**
>> +                * @lock: Lock protecting the below fields.
>> +                */
>> +               spinlock_t lock;
>> +               /**
>> +                * @enabled: Reference count indicating number of listeners.
>> +                */
>> +               unsigned int enabled;
>> +               /**
>> +                * @active: Number of contexts currently scheduled in.
>> +                */
>> +               unsigned int active;
>> +               /**
>> +                * @enabled_at: Timestamp when busy stats were enabled.
>> +                */
>> +               ktime_t enabled_at;
>> +               /**
>> +                * @start: Timestamp of the last idle to active transition.
>> +                *
>> +                * Idle is defined as active == 0, active is active > 0.
>> +                */
>> +               ktime_t start;
>> +               /**
>> +                * @total: Total time this engine was busy.
>> +                *
>> +                * Accumulated time not counting the most recent block in cases
>> +                * where engine is currently busy (active > 0).
>> +                */
>> +               ktime_t total;
>> +       } stats;
>>   };
>>   
>>   static inline unsigned int
>> @@ -859,4 +891,64 @@ bool intel_engine_can_store_dword(struct intel_engine_cs *engine);
>>   struct intel_engine_cs *
>>   intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance);
>>   
>> +static inline void intel_engine_context_in(struct intel_engine_cs *engine)
>> +{
>> +       unsigned long flags;
>> +
>> +       if (READ_ONCE(engine->stats.enabled) == 0)
>> +               return;
>> +
>> +       spin_lock_irqsave(&engine->stats.lock, flags);
>> +
>> +       if (engine->stats.enabled > 0) {
>> +               if (engine->stats.active++ == 0)
>> +                       engine->stats.start = ktime_get();
>> +               GEM_BUG_ON(engine->stats.active == 0);
>> +       }
>> +
>> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
>> +}
>> +
>> +static inline void intel_engine_context_out(struct intel_engine_cs *engine)
>> +{
>> +       unsigned long flags;
>> +
>> +       if (READ_ONCE(engine->stats.enabled) == 0)
>> +               return;
>> +
>> +       spin_lock_irqsave(&engine->stats.lock, flags);
>> +
>> +       if (engine->stats.enabled > 0) {
>> +               ktime_t last, now = ktime_get();
>> +
>> +               if (engine->stats.active && --engine->stats.active == 0) {
>> +                       /*
>> +                        * Decrement the active context count and in case GPU
>> +                        * is now idle add up to the running total.
>> +                        */
>> +                       last = ktime_sub(now, engine->stats.start);
>> +
>> +                       engine->stats.total = ktime_add(engine->stats.total,
>> +                                                       last);
>> +               } else if (engine->stats.active == 0) {
>> +                       /*
>> +                        * After turning on engine stats, context out might be
>> +                        * the first event in which case we account from the
>> +                        * time stats gathering was turned on.
>> +                        */
>> +                       last = ktime_sub(now, engine->stats.enabled_at);
>> +
>> +                       engine->stats.total = ktime_add(engine->stats.total,
>> +                                                       last);
>> +               }
>> +       }
>> +
>> +       spin_unlock_irqrestore(&engine->stats.lock, flags);
>> +}
>> +
>> +int intel_enable_engine_stats(struct intel_engine_cs *engine);
>> +void intel_disable_engine_stats(struct intel_engine_cs *engine);
>> +
>> +ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine);
>> +
>>   #endif /* _INTEL_RINGBUFFER_H_ */
>> -- 
>> 2.9.5
>>

Regards,

Tvrtko
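The context in/out accounting in the patch above can be modelled in userspace with a fake monotonic clock standing in for ktime_get() (single-threaded, so the spinlock and enable refcount are dropped); this is only a sketch of the bookkeeping, not the driver code:

```c
#include <assert.h>
#include <stdint.h>

static uint64_t fake_now_ns;	/* stands in for ktime_get() */

struct engine_stats {
	unsigned int active;	/* contexts currently scheduled in */
	uint64_t start;		/* last idle -> active transition */
	uint64_t total;		/* accumulated busy time */
};

/* intel_engine_context_in(): start timing on idle -> active. */
static void context_in(struct engine_stats *st)
{
	if (st->active++ == 0)
		st->start = fake_now_ns;
}

/* intel_engine_context_out(): fold the block into the running total
 * when the last context schedules out. */
static void context_out(struct engine_stats *st)
{
	if (st->active && --st->active == 0)
		st->total += fake_now_ns - st->start;
}

/* intel_engine_get_busy_time(): include any still-running block. */
static uint64_t get_busy_time(const struct engine_stats *st)
{
	uint64_t total = st->total;

	if (st->active)
		total += fake_now_ns - st->start;
	return total;
}
```

Note how overlapping contexts are counted once: a second context_in() while already active does not restart the clock, matching the "busy means active > 0" definition in the patch.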

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-25 17:48   ` Chris Wilson
@ 2017-09-26 12:32     ` Tvrtko Ursulin
  2017-09-26 18:46       ` Rogozhkin, Dmitry V
  2017-09-26 20:05       ` Chris Wilson
  0 siblings, 2 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-26 12:32 UTC (permalink / raw)
  To: Chris Wilson, Tvrtko Ursulin, Intel-gfx


On 25/09/2017 18:48, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> We can use engine busy stats instead of the MMIO sampling timer
>> for better efficiency.
>>
>> As minimum this saves period * num_engines / sec mmio reads,
>> and in a better case, when only engine busy samplers are active,
>> it enables us to not kick off the sampling timer at all.
> 
> Or you could inspect port_isset(execlists.port).
> You can avoid the mmio for this case also by just using HWSP. It's just
> that I never enabled busy tracking in isolation, so I always ended up
> using the mmio.

This would be for execlists only. I could change the main patch to do 
this; do you think it is worth it?

> Stronger argument is that this method give more precise timing than
> stochastic sampling.
> 
>>
>> v2: Rebase.
>> v3:
>>   * Rebase, comments.
>>   * Leave engine busyness controls out of workers.
>> v4: Checkpatch cleanup.
>> v5: Added comment to pmu_needs_timer change.
>> v6:
>>   * Rebase.
>>   * Fix style of some comments. (Chris Wilson)
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> With a better explanation of why this is preferred,
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Thanks,

Tvrtko


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 8/8] drm/i915: Gate engine stats collection with a static key
  2017-09-25 17:56   ` Chris Wilson
@ 2017-09-26 12:42     ` Tvrtko Ursulin
  0 siblings, 0 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-26 12:42 UTC (permalink / raw)
  To: Chris Wilson, Tvrtko Ursulin, Intel-gfx


On 25/09/2017 18:56, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2017-09-25 16:15:43)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> This reduces the cost of the software engine busyness tracking
>> to a single no-op instruction when there are no listeners.
>>
>> v2: Rebase and some comments.
>> v3: Rebase.
>> v4: Checkpatch fixes.
>> v5: Rebase.
>> v6: Use system_long_wq to avoid being blocked by struct_mutex
>>      users.
>> v7: Fix bad conflict resolution from last rebase. (Dmitry Rogozhkin)
>> v8: Rebase.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Bah, still unhappy about the global. I know in all likelihood it doesn't
> matter, but it still bugs me.

If that's the biggest problem with this patch then that's good! :)

But I think some benchmarking is also in order with data added to the 
commit msg.

>> ---
>>   drivers/gpu/drm/i915/i915_pmu.c         |  54 +++++++++++++++--
>>   drivers/gpu/drm/i915/intel_engine_cs.c  |  17 ++++++
>>   drivers/gpu/drm/i915/intel_ringbuffer.h | 101 ++++++++++++++++++++------------
>>   3 files changed, 130 insertions(+), 42 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
>> index 228aa50ce709..e768f33ebb3d 100644
>> --- a/drivers/gpu/drm/i915/i915_pmu.c
>> +++ b/drivers/gpu/drm/i915/i915_pmu.c
>> @@ -501,11 +501,17 @@ static void i915_pmu_enable(struct perf_event *event)
>>                  GEM_BUG_ON(sample >= I915_PMU_SAMPLE_BITS);
>>                  GEM_BUG_ON(engine->pmu.enable_count[sample] == ~0);
>>                  if (engine->pmu.enable_count[sample]++ == 0) {
>> +                       /*
>> +                        * Enable engine busy stats tracking if needed or
>> +                        * alternatively cancel the scheduled disabling of the
>> +                        * same.
>> +                        */
>>                          if (engine_needs_busy_stats(engine) &&
>>                              !engine->pmu.busy_stats) {
>> -                               engine->pmu.busy_stats =
>> -                                       intel_enable_engine_stats(engine) == 0;
>> -                               WARN_ON_ONCE(!engine->pmu.busy_stats);
>> +                               engine->pmu.busy_stats = true;
>> +                               if (!cancel_delayed_work(&engine->pmu.disable_busy_stats))
>> +                                       queue_work(system_long_wq,
>> +                                                  &engine->pmu.enable_busy_stats);
>>                          }
>>                  }
>>          }
>> @@ -548,7 +554,15 @@ static void i915_pmu_disable(struct perf_event *event)
>>                          if (!engine_needs_busy_stats(engine) &&
>>                              engine->pmu.busy_stats) {
>>                                  engine->pmu.busy_stats = false;
>> -                               intel_disable_engine_stats(engine);
>> +                               /*
>> +                                * We request a delayed disable to handle the
>> +                                * rapid on/off cycles on events which can
>> +                                * happen when tools like perf stat start in a
>> +                                * nicer way.
>> +                                */
>> +                               queue_delayed_work(system_long_wq,
>> +                                                  &engine->pmu.disable_busy_stats,
>> +                                                  round_jiffies_up_relative(HZ));
> 
> What's preventing the perverse system from executing the enable after
> the disable? Say the enable was scheduled on another cpu that was
> stalled? >
> Something like a if (cancel_work(enable)) return ?

It would be very perverse indeed considering the delayed disable, but 
well spotted!

It looks like your solution would work. I could also go with a dedicated 
ordered wq. At the moment I don't have a preference. Will give it a think.

>>                          }
>>                  }
>>          }
>> @@ -739,9 +753,27 @@ static int i915_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
>>          return 0;
>>   }
>>   
>> +static void __enable_busy_stats(struct work_struct *work)
>> +{
>> +       struct intel_engine_cs *engine =
>> +               container_of(work, typeof(*engine), pmu.enable_busy_stats);
>> +
>> +       WARN_ON_ONCE(intel_enable_engine_stats(engine));
>> +}
>> +
>> +static void __disable_busy_stats(struct work_struct *work)
>> +{
>> +       struct intel_engine_cs *engine =
>> +              container_of(work, typeof(*engine), pmu.disable_busy_stats.work);
>> +
>> +       intel_disable_engine_stats(engine);
>> +}
>> +
>>   void i915_pmu_register(struct drm_i915_private *i915)
>>   {
>>          int ret;
>> +       struct intel_engine_cs *engine;
>> +       enum intel_engine_id id;
> 
> What a nice Christmas tree you are growing :-p

Will make it prettier.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries
  2017-09-26 12:28     ` Tvrtko Ursulin
@ 2017-09-26 13:00       ` Chris Wilson
  2017-09-26 13:14       ` Chris Wilson
  1 sibling, 0 replies; 30+ messages in thread
From: Chris Wilson @ 2017-09-26 13:00 UTC (permalink / raw)
  To: Tvrtko Ursulin, Tvrtko Ursulin, Intel-gfx; +Cc: Peter Zijlstra

Quoting Tvrtko Ursulin (2017-09-26 13:28:03)
> On 25/09/2017 18:37, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2017-09-25 16:15:38)
> >> +       for_each_engine(engine, dev_priv, id) {
> >> +               u32 current_seqno = intel_engine_get_seqno(engine);
> >> +               u32 last_seqno = intel_engine_last_submit(engine);
> >> +               u32 enable = engine->pmu.enable;
> >> +
> >> +               if (i915_seqno_passed(current_seqno, last_seqno))
> >> +                       continue;
> >> +
> >> +               if ((enable & BIT(I915_SAMPLE_QUEUED)) &&
> >> +                   (last_seqno - 1 > current_seqno))
> > 
> > Hmm, this isn't queued. This is more than one rq submitted to hw.
> 
> Yes, that's what I thought we want from the PMU perspective. Maybe 
> rename to runnable if queued is not the clearest?

It's not runnable either. Everything on the execlists queue, or on hw
for legacy, is runnable. That count is what I think we want for
userspace.

> > Do you want total inflight (including waiting-for-fences)? That would be
> > engine->timeline->inflight_seqnos (additional problem in its laziness).
> > 
> > Hard to distinguish all other cases, but last - current is susceptible
> > to not treating all ready requests as being queued. 
> 
> Hm how?

Those not assigned a seqno for starters.

> 
> >And miscounts on
> > legacy across semaphores (which you've excluded for execlists).
> > 
> > Aiui you want
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> > index 4eb1a76731b2..19b8b31afbbc 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_request.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> > @@ -477,6 +477,7 @@ void __i915_gem_request_submit(struct drm_i915_gem_request *request)
> >          engine->emit_breadcrumb(request,
> >                                  request->ring->vaddr + request->postfix);
> >   
> > +       atomic_dec(&request->engine->queued);
> >          spin_lock(&request->timeline->lock);
> >          list_move_tail(&request->link, &timeline->requests);
> >          spin_unlock(&request->timeline->lock);
> > @@ -525,6 +526,7 @@ void __i915_gem_request_unsubmit(struct drm_i915_gem_request *request)
> >          spin_lock(&timeline->lock);
> >          list_move(&request->link, &timeline->requests);
> >          spin_unlock(&timeline->lock);
> > +       atomic_inc(&request->engine->queued);
> >   
> >          /* We don't need to wake_up any waiters on request->execute, they
> >           * will get woken by any other event or us re-adding this request
> > @@ -556,6 +558,7 @@ submit_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
> >          switch (state) {
> >          case FENCE_COMPLETE:
> >                  trace_i915_gem_request_submit(request);
> > +               atomic_inc(&request->engine->queued);
> >                  request->engine->submit_request(request);
> 
> Or have the counter as a normal unsigned int under the engine timeline 
> lock, moving it into execlists_submit_request etc?
> 
> >                  break;
> >   
> > 
> > That excludes the requests actually in the hw queue, just those in the sw
> > queue. Or both.
> 
> This looks good to me, I was just reluctant to suggest new software 
> tracking. Happy to go with this and to set the metric meaning to this?

I'm not convinced it's the metric we want either! :( Going back to the
wsim angle, I do think we want queue-depth / loadavg, the count of all
runnable tasks including those passed to hw. So what I think we want is
load/runnable = (last_seqno - current_seqno) + atomic_read(^^)

We can do the averaging trick as the same, so that we have a consistent
metric over the sample period, it will just be allowed to exceed 100%.

Then since you already have current_seqno around, SAMPLE_BUSY becomes
current_seqno != last_seqno. fw avoidance for everyone, \o/

> >> +static void frequency_sample(struct drm_i915_private *dev_priv)
> >> +{
> >> +       if (dev_priv->pmu.enable &
> >> +           config_enabled_mask(I915_PMU_ACTUAL_FREQUENCY)) {
> >> +               u64 val;
> >> +
> >> +               val = dev_priv->rps.cur_freq;
> >> +               if (dev_priv->gt.awake &&
> >> +                   intel_runtime_pm_get_if_in_use(dev_priv)) {
> >> +                       val = intel_get_cagf(dev_priv,
> >> +                                            I915_READ_NOTRACE(GEN6_RPSTAT1));
> >> +                       intel_runtime_pm_put(dev_priv);
> >> +               }
> >> +               val = intel_gpu_freq(dev_priv, val);
> >> +               dev_priv->pmu.sample[__I915_SAMPLE_FREQ_ACT] += val * PERIOD;
> >> +       }
> >> +
> >> +       if (dev_priv->pmu.enable &
> >> +           config_enabled_mask(I915_PMU_REQUESTED_FREQUENCY)) {
> >> +               u64 val = intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq);
> >> +
> >> +               dev_priv->pmu.sample[__I915_SAMPLE_FREQ_REQ] += val * PERIOD;
> >> +       }
> > 
> > Ok. Do we need a comment to explain the importance of averaging here?
> > Should we be doing trapezium integration?
> 
> After reading up on it, do you mean increasing the accuracy by something 
> like sample[] += val * PERIOD + ((val * PERIOD - sample[]) * PERIOD / 2) ?

Hmm, I haven't seen that formulation before (it feels that using the
cumulative total like that is wrong). I was just thinking of
doing

sample[] += (prev + val) * PERIOD / 2;

which is no better in the long run, so the improvement is only at the
endpoints.
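For comparison, a minimal sketch of the two accumulation rules (the PERIOD
value is a hypothetical 200Hz period in ns, not the driver's actual one):

```c
#include <assert.h>
#include <stdint.h>

#define PERIOD 5000000ULL	/* hypothetical 200Hz sampling period, in ns */

static uint64_t sample_rect, sample_trap, prev;

/* accumulate one frequency sample with both integration rules */
static void freq_sample(uint64_t val)
{
	sample_rect += val * PERIOD;			/* rectangle rule */
	sample_trap += (prev + val) * PERIOD / 2;	/* trapezium rule */
	prev = val;
}
```

Both rules converge to the same total for a steady signal, so the trapezium
rule only improves accuracy at the endpoints, as noted above.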

> I am not sure if it makes sense here since our freq changes are much 
> slower and less granular than the sampling period.

What's the current sampling frequency? At one point, I was just using
200Hz at which point rps starts being at around the same frequency.
 
> But it sounds like a good idea for other samplers, no? They are 
> potentially much more granular than the sampling period.

An unhelpful digression, sorry.

> >> +               ret = 0;
> >> +               switch (event->attr.config) {
> >> +               case I915_PMU_INTERRUPTS:
> >> +                       break;
> >> +               case I915_PMU_ACTUAL_FREQUENCY:
> >> +                       if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))
> >> +                                /* Requires a mutex for sampling! */
> >> +                               ret = -ENODEV;
> >> +               case I915_PMU_REQUESTED_FREQUENCY:
> >> +                       if (INTEL_GEN(i915) < 6)
> >> +                               ret = -ENODEV;
> >> +                       break;
> >> +               case I915_PMU_RC6_RESIDENCY:
> >> +                       if (!HAS_RC6(i915))
> >> +                               ret = -ENODEV;
> >> +                       break;
> >> +               case I915_PMU_RC6p_RESIDENCY:
> >> +               case I915_PMU_RC6pp_RESIDENCY:
> >> +                       if (!HAS_RC6p(i915))
> >> +                               ret = -ENODEV;
> > 
> > Must introduce you to my HAS_RC6pp() patch.
> 
> Can I have a reminder please? You have a patch which adds it? I thought 
> from the code that where we have RC6p we also have RC6pp, wrong?

https://patchwork.freedesktop.org/patch/146444/

We don't have any hw where RC6pp was validated.

> >> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> >> index fe25a01c81f2..682abafecc8c 100644
> >> --- a/include/uapi/drm/i915_drm.h
> >> +++ b/include/uapi/drm/i915_drm.h
> >> @@ -86,6 +86,63 @@ enum i915_mocs_table_index {
> >>          I915_MOCS_CACHED,
> >>   };
> > 
> > Hmm, how about uapi/drm/i915_perf.h? (Or i915_pmu.h)
> > We are not adding traditional drm-ioctl functionality (except for some
> > common enums). But I guess it is not huge, so no big concern. Just
> > thinking that we essentially copy this into igt, so we should really
> > make it a copy!
> 
> No objections from me. Definitely want it?

I wouldn't say no. I wouldn't suggest it must be done either. :)

> >> +#define I915_PMU_RC6_RESIDENCY         __I915_PMU_OTHER(3)
> >> +#define I915_PMU_RC6p_RESIDENCY                __I915_PMU_OTHER(4)
> >> +#define I915_PMU_RC6pp_RESIDENCY       __I915_PMU_OTHER(5)
> >> +
> 
> Would we want to add some more counters?

At a reasonable frequency, as you say almost every new feature will want
some counter or two. No more pushing stats into debugfs that no one
cares about (not looking at the guc)!
 
> I915_PMU_ENGINE_PREEMPT - number of preemption events on the engine
> I915_PMU_ENGINE_RESCHEDULE - number of requests which needed to be moved 
> in the tree due to a new request arriving
> 
> Or leave all this for later, since you also had some ideas on what to add.

Keep a second patch around to add a counter or two, so we can check how
painful it would be to add more -- and hence minimise that pain now.
-Chris

* Re: [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries
  2017-09-26 12:28     ` Tvrtko Ursulin
  2017-09-26 13:00       ` Chris Wilson
@ 2017-09-26 13:14       ` Chris Wilson
  1 sibling, 0 replies; 30+ messages in thread
From: Chris Wilson @ 2017-09-26 13:14 UTC (permalink / raw)
  To: Tvrtko Ursulin, Tvrtko Ursulin, Intel-gfx; +Cc: Peter Zijlstra

Quoting Tvrtko Ursulin (2017-09-26 13:28:03)
> 
> On 25/09/2017 18:37, Chris Wilson wrote:
> > diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> > index 4eb1a76731b2..19b8b31afbbc 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_request.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> > @@ -477,6 +477,7 @@ void __i915_gem_request_submit(struct drm_i915_gem_request *request)
> >          engine->emit_breadcrumb(request,
> >                                  request->ring->vaddr + request->postfix);
> >   
> > +       atomic_dec(&request->engine->queued);
> >          spin_lock(&request->timeline->lock);
> >          list_move_tail(&request->link, &timeline->requests);
> >          spin_unlock(&request->timeline->lock);
> > @@ -525,6 +526,7 @@ void __i915_gem_request_unsubmit(struct drm_i915_gem_request *request)
> >          spin_lock(&timeline->lock);
> >          list_move(&request->link, &timeline->requests);
> >          spin_unlock(&timeline->lock);
> > +       atomic_inc(&request->engine->queued);
> >   
> >          /* We don't need to wake_up any waiters on request->execute, they
> >           * will get woken by any other event or us re-adding this request
> > @@ -556,6 +558,7 @@ submit_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
> >          switch (state) {
> >          case FENCE_COMPLETE:
> >                  trace_i915_gem_request_submit(request);
> > +               atomic_inc(&request->engine->queued);
> >                  request->engine->submit_request(request);
> 
> Or have the counter as a normal unsigned int under the engine timeline 
> lock, moving it into execlists_submit_request etc?
> 
> >                  break;
> >   
> > 
> > That excludes the requests actually in the hw queue, just those in the sw
> > queue. Or both.
> 
> This looks good to me, I was just reluctant to suggest new software 
> tracking. Happy to go with this and to set the metric meaning to this?

It's always a cost-benefit analysis. The cost is usually pretty clear;
the benefits less so! Though there's a hidden cost: being ABI, we want to
make sure that it is meaningful for a large user base and remains
applicable for the next 5 years or so (without becoming a burden upon
ourselves).

In this case, we have a potential consumer that we think will be a proof
of concept for a good chunk of applications, and we are trying to find
just the right counter(s) that will satisfy it for the next few years. So
the benefit will be clear as well (or not, and we can scrap it before
committing ourselves to it).
-Chris

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-26 12:32     ` Tvrtko Ursulin
@ 2017-09-26 18:46       ` Rogozhkin, Dmitry V
  2017-09-26 20:01         ` Chris Wilson
  2017-09-27  7:59         ` Tvrtko Ursulin
  2017-09-26 20:05       ` Chris Wilson
  1 sibling, 2 replies; 30+ messages in thread
From: Rogozhkin, Dmitry V @ 2017-09-26 18:46 UTC (permalink / raw)
  To: tvrtko.ursulin; +Cc: Intel-gfx

On Tue, 2017-09-26 at 13:32 +0100, Tvrtko Ursulin wrote:
> On 25/09/2017 18:48, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
> >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >> We can use engine busy stats instead of the MMIO sampling timer
> >> for better efficiency.
> >>
> >> As minimum this saves period * num_engines / sec mmio reads,
> >> and in a better case, when only engine busy samplers are active,
> >> it enables us to not kick off the sampling timer at all.
> > 
> > Or you could inspect port_isset(execlists.port).
> > You can avoid the mmio for this case also by just using HWSP. It's just
> > that I never enabled busy tracking in isolation, so I always ended up
> > using the mmio.
> 
> This would be for execlists only. I could change the main patch to do 
> this, you think it is worth it?

You know, I wonder why we limit this to execlists? Is that because the
scheduler works only for execlists and doesn't work for ringbuffers on
older HW? But consider the following. If we don't have a scheduler, then
we have a FIFO queue and either a hw semaphore or sw sync. For userspace
applications, real execution and waiting are not actually
distinguishable: from their perspective the engine is busy; it either
executes or it is stalled and can't execute anything else, thus it is busy.

From this perspective we could consider extending what we currently do
for execlists to cover FIFO ringbuffers. What do you think? Other metrics
like SEMA or WAIT would be a second level of detail and would describe
the distribution of BUSY time: we spent SEMA time on a semaphore or WAIT
time on a wait, and actively ran something for BUSY-SEMA(WAIT) time.

By the way, with the BDW+ and execlists, is my understanding right that
we report WAIT metric and SEMA is always 0?

> 
> > Stronger argument is that this method give more precise timing than
> > stochastic sampling.
> > 
> >>
> >> v2: Rebase.
> >> v3:
> >>   * Rebase, comments.
> >>   * Leave engine busyness controls out of workers.
> >> v4: Checkpatch cleanup.
> >> v5: Added comment to pmu_needs_timer change.
> >> v6:
> >>   * Rebase.
> >>   * Fix style of some comments. (Chris Wilson)
> >>
> >> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > 
> > With a better explanation of why this is preferred,
> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> Thanks,
> 
> Tvrtko
> 

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-26 18:46       ` Rogozhkin, Dmitry V
@ 2017-09-26 20:01         ` Chris Wilson
  2017-09-27  7:59         ` Tvrtko Ursulin
  1 sibling, 0 replies; 30+ messages in thread
From: Chris Wilson @ 2017-09-26 20:01 UTC (permalink / raw)
  To: Rogozhkin, Dmitry V, tvrtko.ursulin; +Cc: Intel-gfx

Quoting Rogozhkin, Dmitry V (2017-09-26 19:46:48)
> On Tue, 2017-09-26 at 13:32 +0100, Tvrtko Ursulin wrote:
> > On 25/09/2017 18:48, Chris Wilson wrote:
> > > Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
> > >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > >>
> > >> We can use engine busy stats instead of the MMIO sampling timer
> > >> for better efficiency.
> > >>
> > >> As minimum this saves period * num_engines / sec mmio reads,
> > >> and in a better case, when only engine busy samplers are active,
> > >> it enables us to not kick off the sampling timer at all.
> > > 
> > > Or you could inspect port_isset(execlists.port).
> > > You can avoid the mmio for this case also by just using HWSP. It's just
> > > that I never enabled busy tracking in isolation, so I always ended up
> > > using the mmio.
> > 
> > This would be for execlists only. I could change the main patch to do 
> > this, you think it is worth it?
> 
> You know, I wonder why we limit this to execlists?

Because there's only one ringbuffer for legacy and we don't virtualise
the contexts to have distinct buffers onto which we build the different
execlists. It's not impossible to emulate execlists on top; the only win
would be to remove the sema interengine waits (and external waits) and
replace them with cpu waits, unblocking the pipeline. Reducing
ringbuffer to execlist performance isn't appealing however.

> Is that because
> scheduler is working only for execlists and doesn't work for ringbuffers
> on older HW? But consider the following. If we don't have a scheduler,
> then we have FIFO queue and either hw semaphore or sw sync. For the
> userspace applications real execution and wait are not actually
> distinguishable: for him engine is busy, it either executes or it is
> stalled and can't execute anything else, thus - it is busy.

I've used the sema stall notification to reduce said stalls and allow
greater parallelism, knowing that they are caused by inter-engine
dependencies.

The question is whether knowing e.g. global statistics on the number of
requests waiting for external fences is interesting, or just the
runnable length. For the application, the question is more or less what
is the length of the dependency for this batch -- how early can I expect
this to run? Such questions will vary based on the scheduler policy.

> From this perspective we can consider to extend what we do currently for
> execlists to cover FIFO ringbuffers.

You mean how to extend the ringbuffer counters to cover execlists, and
beyond.

> How do you think? Other metrics
> like SEMA or WAIT would be second level of details and will mean
> distribution of BUSY time: we spent SEMA time on semaphore or WAIT time
> on the wait and actively ran something BUSY-SEMA(WAIT) time.

That's what they mean currently (and how I tried to display them in
overlay).
 
> By the way, with the BDW+ and execlists, is my understanding right that
> we report WAIT metric and SEMA is always 0?

WAIT is still used under execlists. For SEMA I'm hoping that
MI_SEMA_WAIT | POLL results in the RING_CTL bit being asserted; that will
be useful for Vulkan, for example.
-Chris

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-26 12:32     ` Tvrtko Ursulin
  2017-09-26 18:46       ` Rogozhkin, Dmitry V
@ 2017-09-26 20:05       ` Chris Wilson
  2017-09-27  8:10         ` Tvrtko Ursulin
  1 sibling, 1 reply; 30+ messages in thread
From: Chris Wilson @ 2017-09-26 20:05 UTC (permalink / raw)
  To: Tvrtko Ursulin, Tvrtko Ursulin, Intel-gfx

Quoting Tvrtko Ursulin (2017-09-26 13:32:25)
> 
> On 25/09/2017 18:48, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
> >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >> We can use engine busy stats instead of the MMIO sampling timer
> >> for better efficiency.
> >>
> >> As minimum this saves period * num_engines / sec mmio reads,
> >> and in a better case, when only engine busy samplers are active,
> >> it enables us to not kick off the sampling timer at all.
> > 
> > Or you could inspect port_isset(execlists.port).
> > You can avoid the mmio for this case also by just using HWSP. It's just
> > that I never enabled busy tracking in isolation, so I always ended up
> > using the mmio.
> 
> This would be for execlists only. I could change the main patch to do 
> this, you think it is worth it?

I'm thinking we do the busy = last_seqno != current_seqno approach for
the first patch. Then this patch is purely about the accuracy
improvement for execlists, taking advantage of the existing interrupts.

We could do something similar by forcing the user interrupt and treating
that as context-out, likely to be much more fiddly than execlists and
those processors did not take kindly to a flood of interrupts from the
gpu. (Or that may just be a side-effect of the interrupt handler from a
few years ago.)
-Chris

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-26 18:46       ` Rogozhkin, Dmitry V
  2017-09-26 20:01         ` Chris Wilson
@ 2017-09-27  7:59         ` Tvrtko Ursulin
  1 sibling, 0 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-27  7:59 UTC (permalink / raw)
  To: Rogozhkin, Dmitry V; +Cc: Intel-gfx


On 26/09/2017 19:46, Rogozhkin, Dmitry V wrote:
> On Tue, 2017-09-26 at 13:32 +0100, Tvrtko Ursulin wrote:
>> On 25/09/2017 18:48, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>
>>>> We can use engine busy stats instead of the MMIO sampling timer
>>>> for better efficiency.
>>>>
>>>> As minimum this saves period * num_engines / sec mmio reads,
>>>> and in a better case, when only engine busy samplers are active,
>>>> it enables us to not kick off the sampling timer at all.
>>>
>>> Or you could inspect port_isset(execlists.port).
>>> You can avoid the mmio for this case also by just using HWSP. It's just
>>> that I never enabled busy tracking in isolation, so I always ended up
>>> using the mmio.
>>
>> This would be for execlists only. I could change the main patch to do
>> this, you think it is worth it?
> 
> You know, I wonder why we limit this by execlists? Is that because

I am not sure there wasn't a slight misunderstanding here. By "execlists 
only" I simply meant the _optimisation_ of getting engine busyness via 
port_isset instead of MMIO reads would be for execlists only (and guc).

So we'd have:

1. legacy - mmio sampling
2. execlists - software sampling in the first pmu batch, then precise 
engine busyness in patches that follow
3. guc - like execlists in the first pmu patch, and then for precise 
busyness perhaps we just use the same context_in/out approach

Regards,

Tvrtko

* Re: [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU
  2017-09-26 20:05       ` Chris Wilson
@ 2017-09-27  8:10         ` Tvrtko Ursulin
  0 siblings, 0 replies; 30+ messages in thread
From: Tvrtko Ursulin @ 2017-09-27  8:10 UTC (permalink / raw)
  To: Chris Wilson, Tvrtko Ursulin, Intel-gfx


On 26/09/2017 21:05, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2017-09-26 13:32:25)
>>
>> On 25/09/2017 18:48, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2017-09-25 16:15:42)
>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>
>>>> We can use engine busy stats instead of the MMIO sampling timer
>>>> for better efficiency.
>>>>
>>>> As minimum this saves period * num_engines / sec mmio reads,
>>>> and in a better case, when only engine busy samplers are active,
>>>> it enables us to not kick off the sampling timer at all.
>>>
>>> Or you could inspect port_isset(execlists.port).
>>> You can avoid the mmio for this case also by just using HWSP. It's just
>>> that I never enabled busy tracking in isolation, so I always ended up
>>> using the mmio.
>>
>> This would be for execlists only. I could change the main patch to do
>> this, you think it is worth it?
> 
> I'm thinking we do the busy = last_seqno != current_seqno approach for
> the first patch. Then this patch is purely about the accuracy
> improvement for execlists, taking advantage of the existing interrupts.

Was just saying, ok, either port_isset or last_seqno != current_seqno.

And then for queued, I was thinking of changing it from time to count. 
That would give queue depth to userspace, which is an interesting metric 
to build any balancing on top of.

I did a quick experiment yesterday and even queue depth expressed as the 
flawed last_seqno - current_seqno is potentially showing a small but 
consistent improvement against pure busyness based balancing (load = 
busy * qd). So I am curious how it would look with a more correct queue 
depth metric.
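The balancing experiment above could be sketched like this (hypothetical
userspace model using load = busy * qd as described; the structure and field
names are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical balancing sketch: score each engine by
 * load = busy * queue-depth and submit to the least loaded one.
 */
struct engine_load {
	uint64_t busy_ns;	/* sampled busy time */
	uint32_t qd;		/* runnable queue depth */
};

static int pick_engine(const struct engine_load *e, int count)
{
	int i, best = 0;

	for (i = 1; i < count; i++) {
		if (e[i].busy_ns * e[i].qd <
		    e[best].busy_ns * e[best].qd)
			best = i;
	}
	return best;
}
```

A fully busy engine with an empty queue still scores zero here, which is the
property that lets queue depth break ties that pure busyness cannot.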

> We could do something similar by forcing the user interrupt and treating
> that as context-out, likely to be much more fiddly than execlists and
> those processors did not take kindly to a flood of interrupts from the
> gpu. (Or that may just be a side-effect of the interrupt handler from a
> few years ago.)

For ringbuff? I think not so interesting.

Regards,

Tvrtko

end of thread, other threads:[~2017-09-27  8:11 UTC | newest]

Thread overview: 30+ messages
-- links below jump to the message on this page --
2017-09-25 15:15 [PATCH v5 0/8] i915 PMU and engine busy stats Tvrtko Ursulin
2017-09-25 15:15 ` [PATCH 1/8] drm/i915: Convert intel_rc6_residency_us to ns Tvrtko Ursulin
2017-09-25 17:08   ` Chris Wilson
2017-09-25 15:15 ` [PATCH 2/8] drm/i915: Extract intel_get_cagf Tvrtko Ursulin
2017-09-25 15:15 ` [PATCH 3/8] drm/i915/pmu: Expose a PMU interface for perf queries Tvrtko Ursulin
2017-09-25 17:37   ` Chris Wilson
2017-09-26 12:28     ` Tvrtko Ursulin
2017-09-26 13:00       ` Chris Wilson
2017-09-26 13:14       ` Chris Wilson
2017-09-25 15:15 ` [PATCH 4/8] drm/i915/pmu: Suspend sampling when GPU is idle Tvrtko Ursulin
2017-09-25 17:39   ` Chris Wilson
2017-09-25 15:15 ` [PATCH 5/8] drm/i915: Wrap context schedule notification Tvrtko Ursulin
2017-09-25 17:40   ` Chris Wilson
2017-09-25 18:40     ` Chris Wilson
2017-09-25 15:15 ` [PATCH 6/8] drm/i915: Engine busy time tracking Tvrtko Ursulin
2017-09-25 17:43   ` Chris Wilson
2017-09-26 12:30     ` Tvrtko Ursulin
2017-09-25 15:15 ` [PATCH 7/8] drm/i915/pmu: Wire up engine busy stats to PMU Tvrtko Ursulin
2017-09-25 17:48   ` Chris Wilson
2017-09-26 12:32     ` Tvrtko Ursulin
2017-09-26 18:46       ` Rogozhkin, Dmitry V
2017-09-26 20:01         ` Chris Wilson
2017-09-27  7:59         ` Tvrtko Ursulin
2017-09-26 20:05       ` Chris Wilson
2017-09-27  8:10         ` Tvrtko Ursulin
2017-09-25 15:15 ` [PATCH 8/8] drm/i915: Gate engine stats collection with a static key Tvrtko Ursulin
2017-09-25 17:56   ` Chris Wilson
2017-09-26 12:42     ` Tvrtko Ursulin
2017-09-25 15:46 ` ✓ Fi.CI.BAT: success for i915 PMU and engine busy stats (rev12) Patchwork
2017-09-25 21:22 ` ✗ Fi.CI.IGT: failure " Patchwork
