* [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf
@ 2017-04-12 15:55 Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Robert Bragg
                   ` (18 more replies)
  0 siblings, 19 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

Updates based on the latest review feedback from Matthew and Lionel, including
an update to the TestOa register config for SKL GT2 compared to the last series
(based on the latest XML files from VPG).

Although the _[SUB]SLICE_MASK GETPARM patches were already reviewed, it's worth
mentioning that the TODO comment in the previous version about rebasing the
parameter IDs before upstreaming has now been removed, and there's now a
minimal comment describing what the new parameters are, consistent with other
recently added parameters. Conveniently the actual IDs didn't need rebasing,
so there's no last-minute uapi change for gputop, mesa and igt.

The series is longer just because I've included the gen7 prep patches (already
reviewed) that I haven't landed yet but that the gen8+ bits depend on.

Regards,
- Robert

Robert Bragg (15):
  drm/i915/perf: fix gen7_append_oa_reports comment
  drm/i915/perf: avoid poll, read, EAGAIN busy loops
  drm/i915/perf: avoid read back of head register
  drm/i915/perf: no head/tail ref in gen7_oa_read
  drm/i915/perf: improve tail race workaround
  drm/i915/perf: improve invalid OA format debug message
  drm/i915/perf: better pipeline aged/aging tail updates
  drm/i915/perf: rate limit spurious oa report notice
  drm/i915: expose _SLICE_MASK GETPARM
  drm/i915: expose _SUBSLICE_MASK GETPARM
  drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  drm/i915/perf: Add OA unit support for Gen 8+
  drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  drm/i915/perf: per-gen timebase for checking sample freq
  drm/i915/perf: remove perf.hook_lock

 drivers/gpu/drm/i915/Makefile           |    8 +-
 drivers/gpu/drm/i915/i915_drv.c         |   10 +
 drivers/gpu/drm/i915/i915_drv.h         |  121 +-
 drivers/gpu/drm/i915/i915_gem_context.h |    1 +
 drivers/gpu/drm/i915/i915_oa_bdw.c      | 5154 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h      |   38 +
 drivers/gpu/drm/i915/i915_oa_bxt.c      | 2541 +++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h      |   38 +
 drivers/gpu/drm/i915/i915_oa_chv.c      | 2730 ++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h      |   38 +
 drivers/gpu/drm/i915/i915_oa_hsw.c      |   58 +-
 drivers/gpu/drm/i915/i915_oa_sklgt2.c   | 3302 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h   |   38 +
 drivers/gpu/drm/i915/i915_oa_sklgt3.c   | 2856 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h   |   38 +
 drivers/gpu/drm/i915/i915_oa_sklgt4.c   | 2910 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h   |   38 +
 drivers/gpu/drm/i915/i915_perf.c        | 1341 ++++++--
 drivers/gpu/drm/i915/i915_reg.h         |   22 +
 drivers/gpu/drm/i915/intel_lrc.c        |    5 +
 include/uapi/drm/i915_drm.h             |   27 +-
 21 files changed, 21076 insertions(+), 238 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

-- 
2.12.0


* [PATCH v4 01/15] drm/i915/perf: fix gen7_append_oa_reports comment
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops Robert Bragg
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

If I'm going to complain about a back-to-front convention then the least
I can do is not muddle the comment up too.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 060b171480d5..78fef53b45c9 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -431,7 +431,7 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  * userspace.
  *
  * Note: reports are consumed from the head, and appended to the
- * tail, so the head chases the tail?... If you think that's mad
+ * tail, so the tail chases the head?... If you think that's mad
  * and back-to-front you're not alone, but this follows the
  * Gen PRM naming convention.
  *
-- 
2.12.0


* [PATCH v4 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 03/15] drm/i915/perf: avoid read back of head register Robert Bragg
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

If the function for checking whether there is OA buffer data available
(during a poll or blocking read) has false positives then we want to
avoid a situation where the subsequent read() returns EAGAIN (after
a more accurate check) followed by a poll() immediately reporting
the same false positive POLLIN event and effectively maintaining a
busy loop until there really is data.

This makes sure that we clear the .pollin event status whenever we return
EAGAIN to userspace, which throttles subsequent POLLIN events and repeated
read attempts to the 5ms intervals of our hrtimer callback.
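
To illustrate the failure mode, here's a minimal sketch of a hypothetical
userspace consumer of a stream opened with I915_PERF_FLAG_FD_NONBLOCK (the
stream_fd and buffer handling are assumptions for illustration, not part of
this patch). Without clearing .pollin on EAGAIN, a false positive from the
cheap poll check would make this loop spin:

	/* needs <poll.h>, <unistd.h>, <errno.h> */
	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
	char buf[16 * 4096];

	while (1) {
		poll(&pfd, 1, -1);      /* may wake on a false positive POLLIN */

		ssize_t n = read(stream_fd, buf, sizeof(buf));
		if (n < 0 && errno == EAGAIN)
			continue;       /* previously poll() could report POLLIN
					 * again immediately and we'd busy loop;
					 * now we sleep until the next 5ms hrtimer
					 * check finds real data.
					 */

		/* ... parse n bytes of drm_i915_perf_record_header records ... */
	}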

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 78fef53b45c9..f59f6dd20922 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1352,7 +1352,15 @@ static ssize_t i915_perf_read(struct file *file,
 		mutex_unlock(&dev_priv->perf.lock);
 	}
 
-	if (ret >= 0) {
+	/* We allow the poll checking to sometimes report false positive POLLIN
+	 * events where we might actually report EAGAIN on read() if there's
+	 * not really any data available. In this situation though we don't
+	 * want to enter a busy loop between poll() reporting a POLLIN event
+	 * and read() returning -EAGAIN. Clearing the oa.pollin state here
+	 * effectively ensures we back off until the next hrtimer callback
+	 * before reporting another POLLIN event.
+	 */
+	if (ret >= 0 || ret == -EAGAIN) {
 		/* Maybe make ->pollin per-stream state if we support multiple
 		 * concurrent streams in the future.
 		 */
-- 
2.12.0


* [PATCH v4 03/15] drm/i915/perf: avoid read back of head register
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read Robert Bragg
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

There's no need for the driver to keep reading back the head pointer
from hardware since the hardware doesn't update it automatically. This
way we can treat any invalid head pointer value as a software/driver
bug instead of spurious hardware behaviour.

This change is also a small stepping stone towards re-working how
the head and tail state is managed as part of an improved workaround
for the tail register race condition.
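
In effect the head pointer becomes write-only with respect to the hardware;
a condensed sketch of the pattern the diff below introduces (not literal
code):

	/* the driver's cached head is now authoritative... */
	u32 head = dev_priv->perf.oa.oa_buffer.head; /* no I915_READ(GEN7_OASTATUS2) */

	/* ... copy reports between head and the HW tail to userspace ... */

	/* ... and the OASTATUS2 head field is only ever written back */
	I915_WRITE(GEN7_OASTATUS2,
		   (head & GEN7_OASTATUS2_HEAD_MASK) | OA_MEM_SELECT_GGTT);
	dev_priv->perf.oa.oa_buffer.head = head;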

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  | 11 ++++++++++
 drivers/gpu/drm/i915/i915_perf.c | 46 ++++++++++++++++++----------------------
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1af4e6f5410c..2f8a7a4f29df 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2466,6 +2466,17 @@ struct drm_i915_private {
 				u8 *vaddr;
 				int format;
 				int format_size;
+
+				/**
+				 * Although we can always read back the head
+				 * pointer register, we prefer to avoid
+				 * trusting the HW state, just to avoid any
+				 * risk that some hardware condition could
+				 * somehow bump the head pointer unpredictably
+				 * and cause us to forward the wrong OA buffer
+				 * data to userspace.
+				 */
+				u32 head;
 			} oa_buffer;
 
 			u32 gen7_latched_oastatus1;
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index f59f6dd20922..f47d1cc2144b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -322,9 +322,8 @@ struct perf_open_properties {
 static bool gen7_oa_buffer_is_empty_fop_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
-	u32 oastatus2 = I915_READ(GEN7_OASTATUS2);
 	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
-	u32 head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK;
+	u32 head = dev_priv->perf.oa.oa_buffer.head;
 	u32 tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
 
 	return OA_TAKEN(tail, head) <
@@ -458,16 +457,24 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		return -EIO;
 
 	head = *head_ptr - gtt_offset;
+
+	/* An out of bounds or misaligned head pointer implies a driver bug
+	 * since we are in full control of head pointer which should only
+	 * be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size,
+		      "Inconsistent OA buffer head pointer = %u\n", head))
+		return -EIO;
+
 	tail -= gtt_offset;
 
 	/* The OA unit is expected to wrap the tail pointer according to the OA
-	 * buffer size and since we should never write a misaligned head
-	 * pointer we don't expect to read one back either...
+	 * buffer size
 	 */
-	if (tail > OA_BUFFER_SIZE || head > OA_BUFFER_SIZE ||
-	    head % report_size) {
-		DRM_ERROR("Inconsistent OA buffer pointer (head = %u, tail = %u): force restart\n",
-			  head, tail);
+	if (tail > OA_BUFFER_SIZE) {
+		DRM_ERROR("Inconsistent OA buffer tail pointer = %u: force restart\n",
+			  tail);
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
 		*head_ptr = I915_READ(GEN7_OASTATUS2) &
@@ -562,8 +569,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 			size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
-	u32 oastatus2;
 	u32 oastatus1;
 	u32 head;
 	u32 tail;
@@ -572,10 +577,9 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
 		return -EIO;
 
-	oastatus2 = I915_READ(GEN7_OASTATUS2);
 	oastatus1 = I915_READ(GEN7_OASTATUS1);
 
-	head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK;
+	head = dev_priv->perf.oa.oa_buffer.head;
 	tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
 
 	/* XXX: On Haswell we don't have a safe way to clear oastatus1
@@ -616,10 +620,9 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
 
-		oastatus2 = I915_READ(GEN7_OASTATUS2);
 		oastatus1 = I915_READ(GEN7_OASTATUS1);
 
-		head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK;
+		head = dev_priv->perf.oa.oa_buffer.head;
 		tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
 	}
 
@@ -635,17 +638,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	ret = gen7_append_oa_reports(stream, buf, count, offset,
 				     &head, tail);
 
-	/* All the report sizes are a power of two and the
-	 * head should always be incremented by some multiple
-	 * of the report size.
-	 *
-	 * A warning here, but notably if we later read back a
-	 * misaligned pointer we will treat that as a bug since
-	 * it could lead to a buffer overrun.
-	 */
-	WARN_ONCE(head & (report_size - 1),
-		  "i915: Writing misaligned OA head pointer");
-
 	/* Note: we update the head pointer here even if an error
 	 * was returned since the error may represent a short read
 	 * where some some reports were successfully copied.
@@ -653,6 +645,7 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	I915_WRITE(GEN7_OASTATUS2,
 		   ((head & GEN7_OASTATUS2_HEAD_MASK) |
 		    OA_MEM_SELECT_GGTT));
+	dev_priv->perf.oa.oa_buffer.head = head;
 
 	return ret;
 }
@@ -834,7 +827,10 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	 * before OASTATUS1, but after OASTATUS2
 	 */
 	I915_WRITE(GEN7_OASTATUS2, gtt_offset | OA_MEM_SELECT_GGTT); /* head */
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
 	I915_WRITE(GEN7_OABUFFER, gtt_offset);
+
 	I915_WRITE(GEN7_OASTATUS1, gtt_offset | OABUFFER_SIZE_16M); /* tail */
 
 	/* On Haswell we have to track which OASTATUS1 flags we've
-- 
2.12.0


* [PATCH v4 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (2 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 03/15] drm/i915/perf: avoid read back of head register Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 05/15] drm/i915/perf: improve tail race workaround Robert Bragg
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

This avoids redundantly passing (inout) head and tail pointers to
gen7_append_oa_reports() from gen7_oa_read(), which doesn't itself need to
reference either.

Moving the head/tail reads and writes into gen7_append_oa_reports should
have no functional effect except to avoid some redundant head pointer
writes in cases where nothing was copied to userspace.

This is a stepping stone towards updating how the head and tail pointer
state is managed to improve the workaround for the OA unit's tail
pointer race. It reduces the number of places we need to read/write the
head and tail pointers.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 51 +++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index f47d1cc2144b..83dc67a635fb 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -420,8 +420,6 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  * @buf: destination buffer given by userspace
  * @count: the number of bytes userspace wants to read
  * @offset: (inout): the current position for writing into @buf
- * @head_ptr: (inout): the current oa buffer cpu read position
- * @tail: the current oa buffer gpu write position
  *
  * Notably any error condition resulting in a short read (-%ENOSPC or
  * -%EFAULT) will be returned even though one or more records may
@@ -439,9 +437,7 @@ static int append_oa_sample(struct i915_perf_stream *stream,
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
-				  size_t *offset,
-				  u32 *head_ptr,
-				  u32 tail)
+				  size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
@@ -449,14 +445,15 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 	int tail_margin = dev_priv->perf.oa.tail_margin;
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
 	u32 mask = (OA_BUFFER_SIZE - 1);
-	u32 head;
+	size_t start_offset = *offset;
+	u32 head, oastatus1, tail;
 	u32 taken;
 	int ret = 0;
 
 	if (WARN_ON(!stream->enabled))
 		return -EIO;
 
-	head = *head_ptr - gtt_offset;
+	head = dev_priv->perf.oa.oa_buffer.head - gtt_offset;
 
 	/* An out of bounds or misaligned head pointer implies a driver bug
 	 * since we are in full control of head pointer which should only
@@ -467,7 +464,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		      "Inconsistent OA buffer head pointer = %u\n", head))
 		return -EIO;
 
-	tail -= gtt_offset;
+	oastatus1 = I915_READ(GEN7_OASTATUS1);
+	tail = (oastatus1 & GEN7_OASTATUS1_TAIL_MASK) - gtt_offset;
 
 	/* The OA unit is expected to wrap the tail pointer according to the OA
 	 * buffer size
@@ -477,8 +475,6 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 			  tail);
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
-		*head_ptr = I915_READ(GEN7_OASTATUS2) &
-			GEN7_OASTATUS2_HEAD_MASK;
 		return -EIO;
 	}
 
@@ -542,7 +538,18 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		report32[0] = 0;
 	}
 
-	*head_ptr = gtt_offset + head;
+
+	if (start_offset != *offset) {
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN7_OASTATUS2,
+			   ((head & GEN7_OASTATUS2_HEAD_MASK) |
+			    OA_MEM_SELECT_GGTT));
+		dev_priv->perf.oa.oa_buffer.head = head;
+	}
 
 	return ret;
 }
@@ -570,8 +577,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	u32 oastatus1;
-	u32 head;
-	u32 tail;
 	int ret;
 
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
@@ -579,9 +584,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 
 	oastatus1 = I915_READ(GEN7_OASTATUS1);
 
-	head = dev_priv->perf.oa.oa_buffer.head;
-	tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
-
 	/* XXX: On Haswell we don't have a safe way to clear oastatus1
 	 * bits while the OA unit is enabled (while the tail pointer
 	 * may be updated asynchronously) so we ignore status bits
@@ -621,9 +623,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
 
 		oastatus1 = I915_READ(GEN7_OASTATUS1);
-
-		head = dev_priv->perf.oa.oa_buffer.head;
-		tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
 	}
 
 	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
@@ -635,19 +634,7 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 			GEN7_OASTATUS1_REPORT_LOST;
 	}
 
-	ret = gen7_append_oa_reports(stream, buf, count, offset,
-				     &head, tail);
-
-	/* Note: we update the head pointer here even if an error
-	 * was returned since the error may represent a short read
-	 * where some some reports were successfully copied.
-	 */
-	I915_WRITE(GEN7_OASTATUS2,
-		   ((head & GEN7_OASTATUS2_HEAD_MASK) |
-		    OA_MEM_SELECT_GGTT));
-	dev_priv->perf.oa.oa_buffer.head = head;
-
-	return ret;
+	return gen7_append_oa_reports(stream, buf, count, offset);
 }
 
 /**
-- 
2.12.0


* [PATCH v4 05/15] drm/i915/perf: improve tail race workaround
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (3 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 06/15] drm/i915/perf: improve invalid OA format debug message Robert Bragg
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

There's a HW race condition between OA unit tail pointer register
updates and writes to memory whereby the tail pointer can sometimes get
ahead of what's been written out to the OA buffer so far (in terms of
what's visible to the CPU).

Although this can be observed explicitly while copying reports to
userspace by checking for a zeroed report-id field in tail reports, we
want to account for this earlier, as part of the _oa_buffer_check to
avoid lots of redundant read() attempts.

Previously the driver used to define an effective tail pointer that
lagged the real pointer by a 'tail margin' measured in bytes derived
from OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
Unfortunately this was flawed considering that the OA unit may also
automatically generate non-periodic reports (such as on context switch)
or the OA unit may be enabled without any periodic sampling.

This improves how we define a tail pointer for reading that lags the
real tail pointer by at least %OA_TAIL_MARGIN_NSEC nanoseconds, which
gives enough time for the corresponding reports to become visible to the
CPU.

The driver now maintains two tail pointers:
 1) An 'aging' tail with an associated timestamp that is tracked until we
    can trust the corresponding data is visible to the CPU; at which point
    it is considered 'aged'.
 2) An 'aged' tail that can be used for read()ing.

The two separate pointers let us decouple read()s from tail pointer aging.

The tail pointers are checked and updated at a limited rate within a
hrtimer callback (the same callback that is used for delivering POLLIN
events), and since we're now measuring the wall clock time elapsed since a
given tail pointer was read, the mechanism no longer cares about the OA
unit's periodic sampling frequency.

The natural place to handle the tail pointer updates was in
gen7_oa_buffer_is_empty(), which is called as part of blocking reads and
the hrtimer callback used for polling, so it has been renamed to
oa_buffer_check() to reflect the added side effect of updating tail state
while checking whether the buffer contains data.
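
For reference, the core of the new check boils down to something like the
following (a simplified sketch of gen7_oa_buffer_check_unlocked() from the
diff below, with the ptr_lock and bounds checking omitted):

	hw_tail = I915_READ(GEN7_OASTATUS1) & GEN7_OASTATUS1_TAIL_MASK;
	now = ktime_get_mono_fast_ns();

	/* start aging a new tail once the HW tail is at least one report
	 * ahead of what's already available for reading
	 */
	if (aging_tail == INVALID_TAIL_PTR &&
	    (aged_tail == INVALID_TAIL_PTR ||
	     OA_TAKEN(hw_tail, aged_tail) >= report_size)) {
		aging_tail = hw_tail;
		aging_timestamp = now;
	}

	/* promote the aging tail once OA_TAIL_MARGIN_NSEC has passed and the
	 * corresponding reports should be visible to the CPU
	 */
	if (aging_tail != INVALID_TAIL_PTR &&
	    (now - aging_timestamp) > OA_TAIL_MARGIN_NSEC) {
		aged_tail = aging_tail;
		aging_tail = INVALID_TAIL_PTR;
	}

	/* read()s only ever consume data up to the aged tail */
	return aged_tail != INVALID_TAIL_PTR &&
	       OA_TAKEN(aged_tail, head) >= report_size;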

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  60 ++++++++-
 drivers/gpu/drm/i915/i915_perf.c | 277 ++++++++++++++++++++++++++-------------
 2 files changed, 241 insertions(+), 96 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2f8a7a4f29df..088c4c60bd38 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2094,7 +2094,7 @@ struct i915_oa_ops {
 		    size_t *offset);
 
 	/**
-	 * @oa_buffer_is_empty: Check if OA buffer empty (false positives OK)
+	 * @oa_buffer_check: Check for OA buffer data + update tail
 	 *
 	 * This is either called via fops or the poll check hrtimer (atomic
 	 * ctx) without any locks taken.
@@ -2107,7 +2107,7 @@ struct i915_oa_ops {
 	 * here, which will be handled gracefully - likely resulting in an
 	 * %EAGAIN error for userspace.
 	 */
-	bool (*oa_buffer_is_empty)(struct drm_i915_private *dev_priv);
+	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
 };
 
 struct intel_cdclk_state {
@@ -2450,9 +2450,6 @@ struct drm_i915_private {
 
 			bool periodic;
 			int period_exponent;
-			int timestamp_frequency;
-
-			int tail_margin;
 
 			int metrics_set;
 
@@ -2468,6 +2465,59 @@ struct drm_i915_private {
 				int format_size;
 
 				/**
+				 * Locks reads and writes to all head/tail state
+				 *
+				 * Consider: the head and tail pointer state
+				 * needs to be read consistently from a hrtimer
+				 * callback (atomic context) and read() fop
+				 * (user context) with tail pointer updates
+				 * happening in atomic context and head updates
+				 * in user context and the (unlikely)
+				 * possibility of read() errors needing to
+				 * reset all head/tail state.
+				 *
+				 * Note: Contention or performance aren't
+				 * currently a significant concern here
+				 * considering the relatively low frequency of
+				 * hrtimer callbacks (5ms period) and that
+				 * reads typically only happen in response to a
+				 * hrtimer event and likely complete before the
+				 * next callback.
+				 *
+				 * Note: This lock is not held *while* reading
+				 * and copying data to userspace so the value
+				 * of head observed in hrtimer callbacks won't
+				 * represent any partial consumption of data.
+				 */
+				spinlock_t ptr_lock;
+
+				/**
+				 * One 'aging' tail pointer and one 'aged'
+				 * tail pointer ready to be used for reading.
+				 *
+				 * Initial values of 0xffffffff are invalid
+				 * and imply that an update is required
+				 * (and should be ignored by an attempted
+				 * read)
+				 */
+				struct {
+					u32 offset;
+				} tails[2];
+
+				/**
+				 * Index for the aged tail ready to read()
+				 * data up to.
+				 */
+				unsigned int aged_tail_idx;
+
+				/**
+				 * A monotonic timestamp for when the current
+				 * aging tail pointer was read; used to
+				 * determine when it is old enough to trust.
+				 */
+				u64 aging_timestamp;
+
+				/**
 				 * Although we can always read back the head
 				 * pointer register, we prefer to avoid
 				 * trusting the HW state, just to avoid any
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 83dc67a635fb..18734e1926b9 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -205,25 +205,49 @@
 
 #define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))
 
-/* There's a HW race condition between OA unit tail pointer register updates and
+/**
+ * DOC: OA Tail Pointer Race
+ *
+ * There's a HW race condition between OA unit tail pointer register updates and
  * writes to memory whereby the tail pointer can sometimes get ahead of what's
- * been written out to the OA buffer so far.
+ * been written out to the OA buffer so far (in terms of what's visible to the
+ * CPU).
+ *
+ * Although this can be observed explicitly while copying reports to userspace
+ * by checking for a zeroed report-id field in tail reports, we want to account
+ * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * read() attempts.
+ *
+ * In effect we define a tail pointer for reading that lags the real tail
+ * pointer by at least %OA_TAIL_MARGIN_NSEC nanoseconds, which gives enough
+ * time for the corresponding reports to become visible to the CPU.
+ *
+ * To manage this we actually track two tail pointers:
+ *  1) An 'aging' tail with an associated timestamp that is tracked until we
+ *     can trust the corresponding data is visible to the CPU; at which point
+ *     it is considered 'aged'.
+ *  2) An 'aged' tail that can be used for read()ing.
  *
- * Although this can be observed explicitly by checking for a zeroed report-id
- * field in tail reports, it seems preferable to account for this earlier e.g.
- * as part of the _oa_buffer_is_empty checks to minimize -EAGAIN polling cycles
- * in this situation.
+ * The two separate pointers let us decouple read()s from tail pointer aging.
  *
- * To give time for the most recent reports to land before they may be copied to
- * userspace, the driver operates as if the tail pointer effectively lags behind
- * the HW tail pointer by 'tail_margin' bytes. The margin in bytes is calculated
- * based on this constant in nanoseconds, the current OA sampling exponent
- * and current report size.
+ * The tail pointers are checked and updated at a limited rate within a hrtimer
+ * callback (the same callback that is used for delivering POLLIN events)
  *
- * There is also a fallback check while reading to simply skip over reports with
- * a zeroed report-id.
+ * Initially the tails are marked invalid with %INVALID_TAIL_PTR which
+ * indicates that an updated tail pointer is needed.
+ *
+ * Most of the implementation details for this workaround are in
+ * gen7_oa_buffer_check_unlocked() and gen7_append_oa_reports()
+ *
+ * Note for posterity: previously the driver used to define an effective tail
+ * pointer that lagged the real pointer by a 'tail margin' measured in bytes
+ * derived from %OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
+ * This was flawed considering that the OA unit may also automatically generate
+ * non-periodic reports (such as on context switch) or the OA unit may be
+ * enabled without any periodic sampling.
  */
 #define OA_TAIL_MARGIN_NSEC	100000ULL
+#define INVALID_TAIL_PTR	0xffffffff
 
 /* frequency for checking whether the OA unit has written new reports to the
  * circular OA buffer...
@@ -308,26 +332,116 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };
 
-/* NB: This is either called via fops or the poll check hrtimer (atomic ctx)
+/**
+ * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * @dev_priv: i915 device instance
  *
- * It's safe to read OA config state here unlocked, assuming that this is only
- * called while the stream is enabled, while the global OA configuration can't
- * be modified.
+ * This is either called via fops (for blocking reads in user ctx) or the poll
+ * check hrtimer (atomic ctx) to check the OA buffer tail pointer and check
+ * if there is data available for userspace to read.
  *
- * Note: we don't lock around the head/tail reads even though there's the slim
- * possibility of read() fop errors forcing a re-init of the OA buffer
- * pointers.  A race here could result in a false positive !empty status which
- * is acceptable.
+ * This function is central to providing a workaround for the OA unit tail
+ * pointer having a race with respect to what data is visible to the CPU.
+ * It is responsible for reading tail pointers from the hardware and giving
+ * the pointers time to 'age' before they are made available for reading.
+ * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
+ *
+ * Besides returning true when there is data available to read() this function
+ * also has the side effect of updating the oa_buffer.tails[], .aging_timestamp
+ * and .aged_tail_idx state used for reading.
+ *
+ * Note: It's safe to read OA config state here unlocked, assuming that this is
+ * only called while the stream is enabled, while the global OA configuration
+ * can't be modified.
+ *
+ * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_is_empty_fop_unlocked(struct drm_i915_private *dev_priv)
+static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
-	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
-	u32 head = dev_priv->perf.oa.oa_buffer.head;
-	u32 tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	unsigned long flags;
+	unsigned int aged_idx;
+	u32 oastatus1;
+	u32 head, hw_tail, aged_tail, aging_tail;
+	u64 now;
+
+	/* We have to consider the (unlikely) possibility that read() errors
+	 * could result in an OA buffer reset which might reset the head,
+	 * tails[] and aged_tail state.
+	 */
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: The head we observe here might effectively be a little out of
+	 * date (between head and tails[aged_idx].offset if there is currently
+	 * a read() in progress.
+	 */
+	head = dev_priv->perf.oa.oa_buffer.head;
+
+	aged_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
+	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
+
+	oastatus1 = I915_READ(GEN7_OASTATUS1);
+	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+
+	/* The tail pointer increases in 64 byte increments,
+	 * not in report_size steps...
+	 */
+	hw_tail &= ~(report_size - 1);
+
+	now = ktime_get_mono_fast_ns();
+
+	/* Update the aging tail
+	 *
+	 * We throttle aging tail updates until we have a new tail that
+	 * represents >= one report more data than is already available for
+	 * reading. This ensures there will be enough data for a successful
+	 * read once this new pointer has aged and ensures we will give the new
+	 * pointer time to age.
+	 */
+	if (aging_tail == INVALID_TAIL_PTR &&
+	    (aged_tail == INVALID_TAIL_PTR ||
+	     OA_TAKEN(hw_tail, aged_tail) >= report_size)) {
+		struct i915_vma *vma = dev_priv->perf.oa.oa_buffer.vma;
+		u32 gtt_offset = i915_ggtt_offset(vma);
+
+		/* Be paranoid and do a bounds check on the pointer read back
+		 * from hardware, just in case some spurious hardware condition
+		 * could put the tail out of bounds...
+		 */
+		if (hw_tail >= gtt_offset &&
+		    hw_tail < (gtt_offset + OA_BUFFER_SIZE)) {
+			dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset =
+				aging_tail = hw_tail;
+			dev_priv->perf.oa.oa_buffer.aging_timestamp = now;
+		} else {
+			DRM_ERROR("Ignoring spurious out of range OA buffer tail pointer = %u\n",
+				  hw_tail);
+		}
+	}
+
+	/* Update the aged tail
+	 *
+	 * Flip the tail pointer available for read()s once the aging tail is
+	 * old enough to trust that the corresponding data will be visible to
+	 * the CPU...
+	 */
+	if (aging_tail != INVALID_TAIL_PTR &&
+	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
+	     OA_TAIL_MARGIN_NSEC)) {
+		aged_idx ^= 1;
+		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
+
+		aged_tail = aging_tail;
 
-	return OA_TAKEN(tail, head) <
-		dev_priv->perf.oa.tail_margin + report_size;
+		/* Mark that we need a new pointer to start aging... */
+		dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
+	}
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	return aged_tail == INVALID_TAIL_PTR ?
+		false : OA_TAKEN(aged_tail, head) >= report_size;
 }
 
 /**
@@ -442,58 +556,50 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
-	int tail_margin = dev_priv->perf.oa.tail_margin;
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
 	u32 mask = (OA_BUFFER_SIZE - 1);
 	size_t start_offset = *offset;
-	u32 head, oastatus1, tail;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
 	u32 taken;
 	int ret = 0;
 
 	if (WARN_ON(!stream->enabled))
 		return -EIO;
 
-	head = dev_priv->perf.oa.oa_buffer.head - gtt_offset;
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 
-	/* An out of bounds or misaligned head pointer implies a driver bug
-	 * since we are in full control of head pointer which should only
-	 * be incremented by multiples of the report size (notably also
-	 * all a power of two).
-	 */
-	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size,
-		      "Inconsistent OA buffer head pointer = %u\n", head))
-		return -EIO;
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
 
-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	tail = (oastatus1 & GEN7_OASTATUS1_TAIL_MASK) - gtt_offset;
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 
-	/* The OA unit is expected to wrap the tail pointer according to the OA
-	 * buffer size
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
 	 */
-	if (tail > OA_BUFFER_SIZE) {
-		DRM_ERROR("Inconsistent OA buffer tail pointer = %u: force restart\n",
-			  tail);
-		dev_priv->perf.oa.ops.oa_disable(dev_priv);
-		dev_priv->perf.oa.ops.oa_enable(dev_priv);
-		return -EIO;
-	}
-
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
 
-	/* The tail pointer increases in 64 byte increments, not in report_size
-	 * steps...
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
 	 */
-	tail &= ~(report_size - 1);
+	head -= gtt_offset;
+	tail -= gtt_offset;
 
-	/* Move the tail pointer back by the current tail_margin to account for
-	 * the possibility that the latest reports may not have really landed
-	 * in memory yet...
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
 	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
 
-	if (OA_TAKEN(tail, head) < report_size + tail_margin)
-		return -EAGAIN;
-
-	tail -= tail_margin;
-	tail &= mask;
 
 	for (/* none */;
 	     (taken = OA_TAKEN(tail, head));
@@ -540,6 +646,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 
 
 	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
 		/* We removed the gtt_offset for the copy loop above, indexing
 		 * relative to oa_buf_base so put back here...
 		 */
@@ -549,6 +657,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 			   ((head & GEN7_OASTATUS2_HEAD_MASK) |
 			    OA_MEM_SELECT_GGTT));
 		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 	}
 
 	return ret;
@@ -659,14 +769,8 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 	if (!dev_priv->perf.oa.periodic)
 		return -EIO;
 
-	/* Note: the oa_buffer_is_empty() condition is ok to run unlocked as it
-	 * just performs mmio reads of the OA buffer head + tail pointers and
-	 * it's assumed we're handling some operation that implies the stream
-	 * can't be destroyed until completion (such as a read()) that ensures
-	 * the device + OA buffer can't disappear
-	 */
 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					!dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv));
+					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
 }
 
 /**
@@ -809,6 +913,9 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 
 	/* Pre-DevBDW: OABUFFER must be set with counters off,
 	 * before OASTATUS1, but after OASTATUS2
@@ -820,6 +927,12 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 
 	I915_WRITE(GEN7_OASTATUS1, gtt_offset | OABUFFER_SIZE_16M); /* tail */
 
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
 	/* On Haswell we have to track which OASTATUS1 flags we've
 	 * already seen since they can't be cleared while periodic
 	 * sampling is enabled.
@@ -1077,12 +1190,6 @@ static void i915_oa_stream_disable(struct i915_perf_stream *stream)
 		hrtimer_cancel(&dev_priv->perf.oa.poll_check_timer);
 }
 
-static u64 oa_exponent_to_ns(struct drm_i915_private *dev_priv, int exponent)
-{
-	return div_u64(1000000000ULL * (2ULL << exponent),
-		       dev_priv->perf.oa.timestamp_frequency);
-}
-
 static const struct i915_perf_stream_ops i915_oa_stream_ops = {
 	.destroy = i915_oa_stream_destroy,
 	.enable = i915_oa_stream_enable,
@@ -1173,20 +1280,9 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	dev_priv->perf.oa.metrics_set = props->metrics_set;
 
 	dev_priv->perf.oa.periodic = props->oa_periodic;
-	if (dev_priv->perf.oa.periodic) {
-		u32 tail;
-
+	if (dev_priv->perf.oa.periodic)
 		dev_priv->perf.oa.period_exponent = props->oa_period_exponent;
 
-		/* See comment for OA_TAIL_MARGIN_NSEC for details
-		 * about this tail_margin...
-		 */
-		tail = div64_u64(OA_TAIL_MARGIN_NSEC,
-				 oa_exponent_to_ns(dev_priv,
-						   props->oa_period_exponent));
-		dev_priv->perf.oa.tail_margin = (tail + 1) * format_size;
-	}
-
 	if (stream->ctx) {
 		ret = oa_get_render_ctx_id(stream);
 		if (ret)
@@ -1359,7 +1455,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);
 
-	if (!dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv)) {
+	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -2054,6 +2150,7 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 	INIT_LIST_HEAD(&dev_priv->perf.streams);
 	mutex_init(&dev_priv->perf.lock);
 	spin_lock_init(&dev_priv->perf.hook_lock);
+	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
 	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
 	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
@@ -2061,10 +2158,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
 	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
 	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_is_empty =
-		gen7_oa_buffer_is_empty_fop_unlocked;
-
-	dev_priv->perf.oa.timestamp_frequency = 12500000;
+	dev_priv->perf.oa.ops.oa_buffer_check =
+		gen7_oa_buffer_check_unlocked;
 
 	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
 
-- 
2.12.0


* [PATCH v4 06/15] drm/i915/perf: improve invalid OA format debug message
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (4 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 05/15] drm/i915/perf: improve tail race workaround Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 07/15] drm/i915/perf: better pipeline aged/aging tail updates Robert Bragg
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

A minor improvement to the debugging output.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 18734e1926b9..08cc2b0dd734 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1904,11 +1904,13 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			break;
 		case DRM_I915_PERF_PROP_OA_FORMAT:
 			if (value == 0 || value >= I915_OA_FORMAT_MAX) {
-				DRM_DEBUG("Invalid OA report format\n");
+				DRM_DEBUG("Out-of-range OA report format %llu\n",
+					  value);
 				return -EINVAL;
 			}
 			if (!dev_priv->perf.oa.oa_formats[value].size) {
-				DRM_DEBUG("Invalid OA report format\n");
+				DRM_DEBUG("Unsupported OA report format %llu\n",
+					  value);
 				return -EINVAL;
 			}
 			props->oa_format = value;
-- 
2.12.0


* [PATCH v4 07/15] drm/i915/perf: better pipeline aged/aging tail updates
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (5 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 06/15] drm/i915/perf: improve invalid OA format debug message Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 08/15] drm/i915/perf: rate limit spurious oa report notice Robert Bragg
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

This updates the tail pointer race workaround so that the 'aged' pointer is
updated before we look to start aging a new one. There may already be new
data available, in which case we can immediately start aging a new pointer
without first having to wait for a later hrtimer callback (and then another
for it to age).

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 41 ++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 08cc2b0dd734..5738b99caa5b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -391,6 +391,29 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 
 	now = ktime_get_mono_fast_ns();
 
+	/* Update the aged tail
+	 *
+	 * Flip the tail pointer available for read()s once the aging tail is
+	 * old enough to trust that the corresponding data will be visible to
+	 * the CPU...
+	 *
+	 * Do this before updating the aging pointer in case we may be able to
+	 * immediately start aging a new pointer too (if new data has become
+	 * available) without needing to wait for a later hrtimer callback.
+	 */
+	if (aging_tail != INVALID_TAIL_PTR &&
+	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
+	     OA_TAIL_MARGIN_NSEC)) {
+		aged_idx ^= 1;
+		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
+
+		aged_tail = aging_tail;
+
+		/* Mark that we need a new pointer to start aging... */
+		dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
+		aging_tail = INVALID_TAIL_PTR;
+	}
+
 	/* Update the aging tail
 	 *
 	 * We throttle aging tail updates until we have a new tail that
@@ -420,24 +443,6 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 		}
 	}
 
-	/* Update the aged tail
-	 *
-	 * Flip the tail pointer available for read()s once the aging tail is
-	 * old enough to trust that the corresponding data will be visible to
-	 * the CPU...
-	 */
-	if (aging_tail != INVALID_TAIL_PTR &&
-	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
-	     OA_TAIL_MARGIN_NSEC)) {
-		aged_idx ^= 1;
-		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
-
-		aged_tail = aging_tail;
-
-		/* Mark that we need a new pointer to start aging... */
-		dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
-	}
-
 	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 
 	return aged_tail == INVALID_TAIL_PTR ?
-- 
2.12.0


* [PATCH v4 08/15] drm/i915/perf: rate limit spurious oa report notice
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (6 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 07/15] drm/i915/perf: better pipeline aged/aging tail updates Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 09/15] drm/i915: expose _SLICE_MASK GETPARM Robert Bragg
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

This change is pre-emptively aiming to avoid a potential cause of kernel
logging noise in case some condition were to result in us seeing invalid
OA reports.

The workaround for the OA unit's tail pointer race condition is what
avoids the primary known cause of invalid reports being seen and with
that in place we aren't expecting to see this notice but it can't be
entirely ruled out.

If some condition does lead to the notice then it's likely to be triggered
repeatedly while attempting to append a sequence of reports, and depending
on the configured OA sampling frequency that might amount to a large number
of repeat notices.

v2: (Chris) avoid inconsistent warning on throttle with
    printk_ratelimit()
v3: (Matt) init and summarise with stream init/close not driver init/fini
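
The throttling itself just leans on the kernel's generic ratelimit helpers;
roughly the pattern used here (a sketch consolidating the hunks in the diff
below):

	/* at stream open: same limits as printk_ratelimit() (10 messages per
	 * 5 second window), and suppress __ratelimit()'s own throttling
	 * warning since mixing it with DRM_NOTE would look inconsistent
	 */
	ratelimit_state_init(&dev_priv->perf.oa.spurious_report_rs, 5 * HZ, 10);
	ratelimit_set_flags(&dev_priv->perf.oa.spurious_report_rs,
			    RATELIMIT_MSG_ON_RELEASE);

	/* in the report copy loop */
	if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
		DRM_NOTE("Skipping spurious, invalid OA report\n");

	/* at stream close: summarise anything that was suppressed */
	if (dev_priv->perf.oa.spurious_report_rs.missed)
		DRM_NOTE("%d spurious OA report notices suppressed due to ratelimiting\n",
			 dev_priv->perf.oa.spurious_report_rs.missed);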

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  6 ++++++
 drivers/gpu/drm/i915/i915_perf.c | 28 +++++++++++++++++++++++++++-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 088c4c60bd38..a0e34934a11f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2448,6 +2448,12 @@ struct drm_i915_private {
 			wait_queue_head_t poll_wq;
 			bool pollin;
 
+			/**
+			 * For rate limiting any notifications of spurious
+			 * invalid OA reports
+			 */
+			struct ratelimit_state spurious_report_rs;
+
 			bool periodic;
 			int period_exponent;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 5738b99caa5b..3277a52ce98e 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -632,7 +632,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		 * copying it to userspace...
 		 */
 		if (report32[0] == 0) {
-			DRM_NOTE("Skipping spurious, invalid OA report\n");
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
 			continue;
 		}
 
@@ -913,6 +914,11 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 		oa_put_render_ctx_id(stream);
 
 	dev_priv->perf.oa.exclusive_stream = NULL;
+
+	if (dev_priv->perf.oa.spurious_report_rs.missed) {
+		DRM_NOTE("%d spurious OA report notices suppressed due to ratelimiting\n",
+			 dev_priv->perf.oa.spurious_report_rs.missed);
+	}
 }
 
 static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
@@ -1268,6 +1274,26 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 		return -EINVAL;
 	}
 
+	/* We set up some ratelimit state to potentially throttle any _NOTES
+	 * about spurious, invalid OA reports which we don't forward to
+	 * userspace.
+	 *
+	 * The initialization is associated with opening the stream (not driver
+	 * init) considering we print a _NOTE about any throttling when closing
+	 * the stream instead of waiting until driver _fini which no one would
+	 * ever see.
+	 *
+	 * Using the same limiting factors as printk_ratelimit()
+	 */
+	ratelimit_state_init(&dev_priv->perf.oa.spurious_report_rs,
+			     5 * HZ, 10);
+	/* Since we use a DRM_NOTE for spurious reports it would be
+	 * inconsistent to let __ratelimit() automatically print a warning for
+	 * throttling.
+	 */
+	ratelimit_set_flags(&dev_priv->perf.oa.spurious_report_rs,
+			    RATELIMIT_MSG_ON_RELEASE);
+
 	stream->sample_size = sizeof(struct drm_i915_perf_record_header);
 
 	format_size = dev_priv->perf.oa.oa_formats[props->oa_format].size;
-- 
2.12.0


* [PATCH v4 09/15] drm/i915: expose _SLICE_MASK GETPARM
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (7 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 08/15] drm/i915/perf: rate limit spurious oa report notice Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM Robert Bragg
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

Enables userspace to determine how many slices are enabled and also which
specific slices are enabled. This information is required, for example, to
analyse some OA counter reports where the counter configuration depends on
the HW slice configuration.
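
From userspace this is just another GETPARAM query; for illustration, a
hypothetical helper (the fd handling and helper name are assumptions, not
part of this patch):

	/* needs <sys/ioctl.h>, <stdint.h> and the drm/i915_drm.h uapi header */
	static int query_slice_mask(int drm_fd, uint32_t *mask)
	{
		int value = 0;
		struct drm_i915_getparam gp = {
			.param = I915_PARAM_SLICE_MASK,
			.value = &value,
		};

		if (ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp))
			return -1; /* e.g. ENODEV on kernels without the param */

		*mask = (uint32_t)value;
		return 0;
	}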

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 5 +++++
 include/uapi/drm/i915_drm.h     | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index bd85e3826b72..76a724b7cc22 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -357,6 +357,11 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		 */
 		value = 1;
 		break;
+	case I915_PARAM_SLICE_MASK:
+		value = INTEL_INFO(dev_priv)->sseu.slice_mask;
+		if (!value)
+			return -ENODEV;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 9ee06ec8a2d6..99bfc3648454 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -412,6 +412,9 @@ typedef struct drm_i915_irq_wait {
  */
 #define I915_PARAM_HAS_EXEC_FENCE	 44
 
+/* Query the mask of slices available for this system */
+#define I915_PARAM_SLICE_MASK		 45
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
-- 
2.12.0


* [PATCH v4 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (8 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 09/15] drm/i915: expose _SLICE_MASK GETPARM Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Robert Bragg
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

Assuming a uniform mask across all slices, this enables userspace to
determine which specific subslices are enabled. This information is
required, for example, to analyse some OA counter reports where the counter
configuration depends on the HW subslice configuration.
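
Together with I915_PARAM_SLICE_MASK from the previous patch this is enough
for userspace to derive the totals needed when normalising such counters; a
hypothetical example (assuming both masks were queried via
DRM_IOCTL_I915_GETPARAM as in the previous patch):

	int n_slices = __builtin_popcount(slice_mask);
	int n_subslices_per_slice = __builtin_popcount(subslice_mask);
	int n_subslices_total = n_slices * n_subslices_per_slice;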

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 5 +++++
 include/uapi/drm/i915_drm.h     | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 76a724b7cc22..133ab46bf2f2 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -362,6 +362,11 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		if (!value)
 			return -ENODEV;
 		break;
+	case I915_PARAM_SUBSLICE_MASK:
+		value = INTEL_INFO(dev_priv)->sseu.subslice_mask;
+		if (!value)
+			return -ENODEV;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 99bfc3648454..689fd7a418a7 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -415,6 +415,11 @@ typedef struct drm_i915_irq_wait {
 /* Query the mask of slices available for this system */
 #define I915_PARAM_SLICE_MASK		 45
 
+/* Assuming it's uniform for each slice, this queries the mask of subslices
+ * per-slice for this system.
+ */
+#define I915_PARAM_SUBSLICE_MASK	 46
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
-- 
2.12.0


* [PATCH v4 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (9 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Robert Bragg
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

Adds static OA unit MUX, B Counter and Flex EU configurations for basic
render metrics on Broadwell, Cherryview, Skylake and Broxton. These are
auto-generated from an XML description of metric sets, currently
maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml WHITELIST=RenderBasic

v2: add newlines to debug messages + fix comment (Matthew Auld)
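
For illustration only, not part of this patch: each metric set below is
registered as a sysfs attribute group named by its UUID, containing an "id"
file. A rough userspace sketch for mapping a UUID to that numeric id,
assuming the metrics directory is exposed under the DRM card node as
/sys/class/drm/<card>/metrics/:

  #include <stdio.h>

  static int lookup_metric_set_id(const char *card, const char *uuid)
  {
          char path[128];
          int id = -1;
          FILE *f;

          snprintf(path, sizeof(path),
                   "/sys/class/drm/%s/metrics/%s/id", card, uuid);
          f = fopen(path, "r");
          if (!f)
                  return -1;
          if (fscanf(f, "%d", &id) != 1)
                  id = -1;
          fclose(f);
          return id;
  }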

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/Makefile         |   8 +-
 drivers/gpu/drm/i915/i915_drv.h       |   2 +
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 380 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h    |  38 ++++
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 238 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h    |  38 ++++
 drivers/gpu/drm/i915/i915_oa_chv.c    | 225 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h    |  38 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 228 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h |  38 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 236 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h |  38 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 247 ++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h |  38 ++++
 14 files changed, 1791 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 2cf04504e494..41400a138a1e 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -127,7 +127,13 @@ i915-y += i915_vgpu.o
 
 # perf code
 i915-y += i915_perf.o \
-	  i915_oa_hsw.o
+	  i915_oa_hsw.o \
+	  i915_oa_bdw.o \
+	  i915_oa_chv.o \
+	  i915_oa_sklgt2.o \
+	  i915_oa_sklgt3.o \
+	  i915_oa_sklgt4.o \
+	  i915_oa_bxt.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a0e34934a11f..13b9125cacdd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2463,6 +2463,8 @@ struct drm_i915_private {
 			int mux_regs_len;
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
+			const struct i915_oa_reg *flex_regs;
+			int flex_regs_len;
 
 			struct {
 				struct i915_vma *vma;
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
new file mode 100644
index 000000000000..b0b1b75fb431
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -0,0 +1,380 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bdw.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bdw = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14110014 },
+	{ _MMIO(0x9888), 0x14310014 },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x183d0800 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x08584000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x185b2400 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380001 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00104000 },
+	{ _MMIO(0x9888), 0x08104000 },
+	{ _MMIO(0x9888), 0x00110030 },
+	{ _MMIO(0x9888), 0x08110031 },
+	{ _MMIO(0x9888), 0x10110000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x16130020 },
+	{ _MMIO(0x9888), 0x06308000 },
+	{ _MMIO(0x9888), 0x08308000 },
+	{ _MMIO(0x9888), 0x06311800 },
+	{ _MMIO(0x9888), 0x08311880 },
+	{ _MMIO(0x9888), 0x10310000 },
+	{ _MMIO(0x9888), 0x0e334000 },
+	{ _MMIO(0x9888), 0x16330080 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x0d888000 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a00a0 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2820 },
+	{ _MMIO(0x9888), 0x258b2550 },
+	{ _MMIO(0x9888), 0x198c1000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801110 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801465 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800ca5 },
+	{ _MMIO(0x9888), 0x41800003 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x14910014 },
+	{ _MMIO(0x9888), 0x14b10014 },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0e1f8000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00dc4000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x00bd8000 },
+	{ _MMIO(0x9888), 0x18bd0800 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x00d84000 },
+	{ _MMIO(0x9888), 0x08d84000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db2400 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x0c9f0800 },
+	{ _MMIO(0x9888), 0x0e9f2a00 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b80001 },
+	{ _MMIO(0x9888), 0x00b92000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x00904000 },
+	{ _MMIO(0x9888), 0x08904000 },
+	{ _MMIO(0x9888), 0x00910030 },
+	{ _MMIO(0x9888), 0x08910031 },
+	{ _MMIO(0x9888), 0x10910000 },
+	{ _MMIO(0x9888), 0x00934000 },
+	{ _MMIO(0x9888), 0x16930020 },
+	{ _MMIO(0x9888), 0x06b08000 },
+	{ _MMIO(0x9888), 0x08b08000 },
+	{ _MMIO(0x9888), 0x06b11800 },
+	{ _MMIO(0x9888), 0x08b11880 },
+	{ _MMIO(0x9888), 0x10b10000 },
+	{ _MMIO(0x9888), 0x0eb34000 },
+	{ _MMIO(0x9888), 0x16b30080 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88b800 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x1b8a0080 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2840 },
+	{ _MMIO(0x9888), 0x258b26a0 },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1100 },
+	{ _MMIO(0x9888), 0x018d2000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801550 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800400 },
+	{ _MMIO(0x9888), 0x458004a1 },
+	{ _MMIO(0x9888), 0x53805555 },
+	{ _MMIO(0x9888), 0x47800421 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801421 },
+	{ _MMIO(0x9888), 0x41800845 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_render_basic_0_slices_0x01);
+		return mux_config_render_basic_0_slices_0x01;
+	} else if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_basic_1_slices_0x02);
+		return mux_config_render_basic_1_slices_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.h b/drivers/gpu/drm/i915/i915_oa_bdw.h
new file mode 100644
index 000000000000..ea6969aaf91a
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BDW_H__
+#define __I915_OA_BDW_H__
+
+extern int i915_oa_n_builtin_metric_sets_bdw;
+
+extern int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
new file mode 100644
index 000000000000..b10e44643bc1
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -0,0 +1,238 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bxt.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bxt = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x166c00f0 },
+	{ _MMIO(0x9888), 0x12120280 },
+	{ _MMIO(0x9888), 0x12320280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x419000a0 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2e0800 },
+	{ _MMIO(0x9888), 0x0e2e5900 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x1c4f0010 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a0fcc00 },
+	{ _MMIO(0x9888), 0x1c0f0002 },
+	{ _MMIO(0x9888), 0x1c2c0040 },
+	{ _MMIO(0x9888), 0x00101000 },
+	{ _MMIO(0x9888), 0x04101000 },
+	{ _MMIO(0x9888), 0x00114000 },
+	{ _MMIO(0x9888), 0x08114000 },
+	{ _MMIO(0x9888), 0x00120020 },
+	{ _MMIO(0x9888), 0x08120021 },
+	{ _MMIO(0x9888), 0x00141000 },
+	{ _MMIO(0x9888), 0x08141000 },
+	{ _MMIO(0x9888), 0x02308000 },
+	{ _MMIO(0x9888), 0x04302000 },
+	{ _MMIO(0x9888), 0x06318000 },
+	{ _MMIO(0x9888), 0x08318000 },
+	{ _MMIO(0x9888), 0x06320800 },
+	{ _MMIO(0x9888), 0x08320840 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x08344000 },
+	{ _MMIO(0x9888), 0x0d931831 },
+	{ _MMIO(0x9888), 0x0f939f3f },
+	{ _MMIO(0x9888), 0x01939e80 },
+	{ _MMIO(0x9888), 0x039303bc },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1993002a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900187 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901110 },
+	{ _MMIO(0x9888), 0x43900423 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900c02 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900020 },
+	{ _MMIO(0x9888), 0x59901111 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900821 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		*len = ARRAY_SIZE(mux_config_render_basic_0_sku_gte_0x03);
+		return mux_config_render_basic_0_sku_gte_0x03;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "22b9519a-e9ba-4c41-8b54-f4f8ca14fa0a",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.h b/drivers/gpu/drm/i915/i915_oa_bxt.h
new file mode 100644
index 000000000000..9e86662c891d
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BXT_H__
+#define __I915_OA_BXT_H__
+
+extern int i915_oa_n_builtin_metric_sets_bxt;
+
+extern int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
new file mode 100644
index 000000000000..1c2889e819c8
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -0,0 +1,225 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_chv.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_chv = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x285a0006 },
+	{ _MMIO(0x9888), 0x2c110014 },
+	{ _MMIO(0x9888), 0x2e110000 },
+	{ _MMIO(0x9888), 0x2c310014 },
+	{ _MMIO(0x9888), 0x2e310000 },
+	{ _MMIO(0x9888), 0x2b8303df },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x00580888 },
+	{ _MMIO(0x9888), 0x1e5a0015 },
+	{ _MMIO(0x9888), 0x205a0014 },
+	{ _MMIO(0x9888), 0x045a0000 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x02180500 },
+	{ _MMIO(0x9888), 0x00190555 },
+	{ _MMIO(0x9888), 0x021d0500 },
+	{ _MMIO(0x9888), 0x021f0a00 },
+	{ _MMIO(0x9888), 0x00380444 },
+	{ _MMIO(0x9888), 0x02390500 },
+	{ _MMIO(0x9888), 0x003a0666 },
+	{ _MMIO(0x9888), 0x00100111 },
+	{ _MMIO(0x9888), 0x06110030 },
+	{ _MMIO(0x9888), 0x0a110031 },
+	{ _MMIO(0x9888), 0x0e110046 },
+	{ _MMIO(0x9888), 0x04110000 },
+	{ _MMIO(0x9888), 0x00110000 },
+	{ _MMIO(0x9888), 0x00130111 },
+	{ _MMIO(0x9888), 0x00300444 },
+	{ _MMIO(0x9888), 0x08310030 },
+	{ _MMIO(0x9888), 0x0c310031 },
+	{ _MMIO(0x9888), 0x10310046 },
+	{ _MMIO(0x9888), 0x04310000 },
+	{ _MMIO(0x9888), 0x00310000 },
+	{ _MMIO(0x9888), 0x00330444 },
+	{ _MMIO(0x9888), 0x038a0a00 },
+	{ _MMIO(0x9888), 0x018b0fff },
+	{ _MMIO(0x9888), 0x038b0a00 },
+	{ _MMIO(0x9888), 0x01855000 },
+	{ _MMIO(0x9888), 0x03850055 },
+	{ _MMIO(0x9888), 0x13830021 },
+	{ _MMIO(0x9888), 0x15830020 },
+	{ _MMIO(0x9888), 0x1783002f },
+	{ _MMIO(0x9888), 0x1983002e },
+	{ _MMIO(0x9888), 0x1b83002d },
+	{ _MMIO(0x9888), 0x1d83002c },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x01840555 },
+	{ _MMIO(0x9888), 0x03840500 },
+	{ _MMIO(0x9888), 0x23800074 },
+	{ _MMIO(0x9888), 0x2580007d },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x01805000 },
+	{ _MMIO(0x9888), 0x03800055 },
+	{ _MMIO(0x9888), 0x01865000 },
+	{ _MMIO(0x9888), 0x03860055 },
+	{ _MMIO(0x9888), 0x01875000 },
+	{ _MMIO(0x9888), 0x03870055 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x4380000a },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x4780000a },
+	{ _MMIO(0x9888), 0x49800000 },
+	{ _MMIO(0x9888), 0x4b800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x55800000 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_basic);
+	return mux_config_render_basic;
+}
+
+int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "9d8a3af5-c02c-4a4a-b947-f1672469e0fb",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.h b/drivers/gpu/drm/i915/i915_oa_chv.h
new file mode 100644
index 000000000000..e494200ff04f
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_CHV_H__
+#define __I915_OA_CHV_H__
+
+extern int i915_oa_n_builtin_metric_sets_chv;
+
+extern int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
new file mode 100644
index 000000000000..ea0317a4781c
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -0,0 +1,228 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt2.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0080 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0d2000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162c2200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190001f },
+	{ _MMIO(0x9888), 0x51904400 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c21 },
+	{ _MMIO(0x9888), 0x47900061 },
+	{ _MMIO(0x9888), 0x57904440 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900004 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if (dev_priv->drm.pdev->revision >= 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_basic_1_sku_gte_0x02);
+		return mux_config_render_basic_1_sku_gte_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.h b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
new file mode 100644
index 000000000000..e99b03f1bf9e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT2_H__
+#define __I915_OA_SKLGT2_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt2;
+
+extern int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
new file mode 100644
index 000000000000..a21211136191
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -0,0 +1,236 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt3.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0380 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162ca200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0200 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x51907710 },
+	{ _MMIO(0x9888), 0x419020a0 },
+	{ _MMIO(0x9888), 0x55901515 },
+	{ _MMIO(0x9888), 0x45900529 },
+	{ _MMIO(0x9888), 0x47901025 },
+	{ _MMIO(0x9888), 0x57907770 },
+	{ _MMIO(0x9888), 0x49902100 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900108 },
+	{ _MMIO(0x9888), 0x59900007 },
+	{ _MMIO(0x9888), 0x43902108 },
+	{ _MMIO(0x9888), 0x53907777 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_basic);
+	return mux_config_render_basic;
+}
+
+int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.h b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
new file mode 100644
index 000000000000..ff7ec3206bf2
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT3_H__
+#define __I915_OA_SKLGT3_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt3;
+
+extern int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
new file mode 100644
index 000000000000..1af4ebe7d51f
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -0,0 +1,247 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt4.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x176c01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e03b0 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4ca400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0230 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0acc2000 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x088d8000 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x0e8f1000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8800 },
+	{ _MMIO(0x9888), 0x1b4e0020 },
+	{ _MMIO(0x9888), 0x096c5300 },
+	{ _MMIO(0x9888), 0x116c0000 },
+	{ _MMIO(0x9888), 0x1d6c0000 },
+	{ _MMIO(0x9888), 0x091b8000 },
+	{ _MMIO(0x9888), 0x1b1c8000 },
+	{ _MMIO(0x9888), 0x0b4c2000 },
+	{ _MMIO(0x9888), 0x090d8000 },
+	{ _MMIO(0x9888), 0x0f0f1000 },
+	{ _MMIO(0x9888), 0x172c0800 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x5190ff30 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55903033 },
+	{ _MMIO(0x9888), 0x45901421 },
+	{ _MMIO(0x9888), 0x47900803 },
+	{ _MMIO(0x9888), 0x5790fff1 },
+	{ _MMIO(0x9888), 0x49900001 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x5990000f },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x5390ffff },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_basic);
+	return mux_config_render_basic;
+}
+
+int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.h b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
new file mode 100644
index 000000000000..4faa8c4579cd
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT4_H__
+#define __I915_OA_SKLGT4_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt4;
+
+extern int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+#endif
-- 
2.12.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v4 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (10 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 17:47   ` Matthew Auld
  2017-04-12 15:55 ` [PATCH v4 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Robert Bragg
                   ` (6 subsequent siblings)
  18 siblings, 1 reply; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency, which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands), is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).
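
(For reference, the per-context periodic sampling state here is an OA
timer exponent; it maps to a sampling period roughly as in the sketch
below. The helper and the nominal 12.5MHz timestamp frequency are
illustrative assumptions for the example only, not part of this patch;
the driver derives the real per-gen timebase separately.)

  #include <linux/math64.h>

  /* Illustrative only: OA sampling period selected by a timer exponent,
   * given the GPU timestamp frequency in Hz (nominally 12500000, i.e.
   * an 80ns tick). Each +1 on the exponent doubles the period.
   */
  static u64 oa_exponent_to_ns(u64 timestamp_frequency, int exponent)
  {
          return div64_u64(1000000000ULL * (2ULL << exponent),
                           timestamp_frequency);
  }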

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, it means
we can no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try and subtract the progress
of other contexts before forwarding reports via read() we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require the
dev.i915.perf_stream_paranoid sysctl to be unset for any access to OA
metrics if not root.
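
(For reference, that means a non-root client wanting system-wide OA
metrics on Gen8+ first needs dev.i915.perf_stream_paranoid set to 0
before an open along the lines of the sketch below will succeed. The
helper is illustrative only, not part of this patch; it uses the
existing i915 perf uapi, and the metrics set ID is the value read from
a config's sysfs "id" file, e.g. /sys/class/drm/card0/metrics/<uuid>/id.)

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Illustrative sketch: open a periodic, system-wide OA stream. */
  static int open_oa_stream(int drm_fd, uint64_t metrics_set, uint64_t oa_exponent)
  {
          uint64_t properties[] = {
                  DRM_I915_PERF_PROP_SAMPLE_OA, 1,
                  DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
                  DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
                  DRM_I915_PERF_PROP_OA_EXPONENT, oa_exponent,
          };
          struct drm_i915_perf_open_param param = {
                  .flags = I915_PERF_FLAG_FD_CLOEXEC,
                  .num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
                  .properties_ptr = (uintptr_t)properties,
          };

          /* No DRM_I915_PERF_PROP_CTX_HANDLE, so this is a system-wide
           * stream and therefore a privileged operation on Gen8+.
           */
          return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }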

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  45 +-
 drivers/gpu/drm/i915/i915_gem_context.h |   1 +
 drivers/gpu/drm/i915/i915_perf.c        | 949 +++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h         |  22 +
 drivers/gpu/drm/i915/intel_lrc.c        |   5 +
 include/uapi/drm/i915_drm.h             |  19 +-
 6 files changed, 948 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 13b9125cacdd..b8dcf281db53 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2061,9 +2061,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
 
 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2094,20 +2102,13 @@ struct i915_oa_ops {
 		    size_t *offset);
 
 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };
 
 struct intel_cdclk_state {
@@ -2469,6 +2470,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;
 
@@ -2538,6 +2540,14 @@ struct drm_i915_private {
 			} oa_buffer;
 
 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;
 
 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2848,6 +2858,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3612,6 +3624,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
 
 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_update_reg_state(struct intel_engine_cs *engine,
+			      struct i915_gem_context *ctx,
+			      uint32_t *reg_state);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 4af2ab94558b..3f4ce73bea43 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -151,6 +151,7 @@ struct i915_gem_context {
 		u64 lrc_desc;
 		int pin_count;
 		bool initialised;
+		atomic_t oa_state_dirty;
 	} engine[I915_NUM_ENGINES];
 
 	/** ring_size: size for allocating the per-engine ring buffer */
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 3277a52ce98e..611f996bece7 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@
 
 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
 
 #define INVALID_CTX_ID 0xffffffff
 
+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+
 
 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };
 
+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)
 
 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };
 
+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;
 
@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
 
-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
 
 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
 
@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note: that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context, it
+		 * needs to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiate appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering that we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, a buffer overflow in any case likely indicates
+	 * that something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * gen7_append_oa_reports - Copies all buffered OA reports into userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;
 
-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
 
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;
 
 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }
 
 /**
@@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	int ret;
 
-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		int ret;
 
-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ret = engine->context_pin(engine, stream->ctx);
-	if (ret)
-		goto unlock;
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;
 
-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we pin
+		 * the vma to ensure the ID remains fixed.
+		 */
+		ret = engine->context_pin(engine, stream->ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}
 
-unlock:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);
 
-	return ret;
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
+
+	return 0;
 }
 
 /**
@@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[RCS];
 
-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);
 
-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);
 
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }
 
 static void
@@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }
 
+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1113,6 +1489,205 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }
 
+/*
+ * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
+ *
+ * MMIO reads or writes to any of the registers listed in the
+ * “Register State Context image” subsections through HOST/IA
+ * MMIO interface for debug purposes must follow the steps below:
+ *
+ * - SW should set the Force Wakeup bit to prevent GT from entering C6.
+ * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
+ * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
+ * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
+ * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
+ * - Read/Write to desired MMIO registers.
+ * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
+ * - Force Wakeup bit should be reset to enable C6 entry.
+ *
+ * XXX: don't nest or overlap calls to this API, it has no ref
+ * counting to track how many entities require the RCS to be
+ * blocked from being idle.
+ */
+static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
+		GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
+	int ret = 0;
+
+	/* In execlist mode the only time there is no active context is
+	 * while idle (and we shouldn't be using this in any other mode)
+	 */
+	if (WARN_ON(!i915.enable_execlists))
+		return -EINVAL;
+
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+
+	/* Note: we don't currently have a good handle on the maximum
+	 * latency for this wake up so while we only need to hold rcs
+	 * busy from process context we at least keep the waiting
+	 * interruptible...
+	 */
+	ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
+	if (ret == -ETIMEDOUT)
+		DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
+
+	if (ret)
+		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+
+	return ret;
+}
+
+static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	if (WARN_ON(!i915.enable_execlists))
+		return;
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+	if (ret)
+		return ret;
+
+	/* Since execlist submission may be happening asynchronously here then
+	 * we first mark existing contexts dirty before we update the current
+	 * context so if any switches happen in the middle we can expect
+	 * that the act of scheduling will have itself ensured a consistent
+	 * OA state update.
+	 */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		/* The actual update of the register state context will happen
+		 * the next time this logical ring is submitted. (See
+		 * i915_oa_update_reg_state() which hooks into
+		 * execlists_update_context())
+		 */
+		atomic_set(&ctx->engine[RCS].oa_state_dirty, 1);
+	}
+
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	/* Now update the current context.
+	 *
+	 * Note: Using MMIO to update per-context registers requires
+	 * some extra care...
+	 */
+	ret = gen8_begin_ctx_mmio(dev_priv);
+	if (ret) {
+		DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
+		return ret;
+	}
+
+	I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
+					GEN8_OA_TIMER_PERIOD_SHIFT) |
+				      (dev_priv->perf.oa.periodic ?
+				       GEN8_OA_TIMER_ENABLE : 0) |
+				      GEN8_OA_COUNTER_RESUME));
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
+			dev_priv->perf.oa.flex_regs_len);
+
+	gen8_end_ctx_mmio(dev_priv);
+
+	return 0;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed causing a lot of redundant
+	 * work to processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering to use that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
+		       dev_priv->perf.oa.mux_regs_len);
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	/* It takes a fairly long time for a new MUX configuration to
+	 * be applied after these register writes. This delay
+	 * duration is taken from Haswell (derived empirically based on
+	 * the render_basic config) but hopefully it covers the
+	 * maximum configuration latency for Gen8+ too...
+	 */
+	usleep_range(15000, 20000);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	ret = gen8_configure_all_contexts(dev_priv);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* NOP */
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1157,6 +1732,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }
 
+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory, which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1183,6 +1781,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }
 
+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1361,6 +1964,88 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }
 
+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct intel_engine_cs *engine,
+					   struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+void i915_oa_update_reg_state(struct intel_engine_cs *engine,
+			      struct i915_gem_context *ctx,
+			      u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	/* XXX: We don't take a lock here and this may run async with
+	 * respect to stream methods. Notably we don't want to block
+	 * context switches by long i915 perf read() operations.
+	 *
+	 * It's expected to always be safe to read the dev_priv->perf
+	 * state needed here, and expected to be benign to redundantly
+	 * update the state if the OA unit has been disabled since
+	 * oa_state_dirty was last set.
+	 */
+	if (atomic_cmpxchg(&ctx->engine[engine->id].oa_state_dirty, 1, 0))
+		gen8_update_reg_state_unlocked(engine, ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1486,7 +2171,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);
 
-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1775,6 +2460,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;
 
@@ -1792,12 +2478,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}
 
+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2069,9 +2771,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;
 
@@ -2087,11 +2786,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;
 
-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}
 
+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2107,13 +2833,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;
 
-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2172,36 +2909,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note: although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming, did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enables
+		 * execlist mode by default.
+		 */
 
-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}
 
-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}
 
-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
 
-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
 
-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }
 
 /**
@@ -2216,5 +3022,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);
 
 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4c72adae368b..c0bbba6d5b78 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define GEN8_OACTXID _MMIO(0x2364)
 
+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)
 
+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)
 
 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)
 
 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0
 
 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define OA_MEM_SELECT_GGTT  (1<<0)
 
+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)
 
 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)
 
+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7df278fe492e..e787656b630f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -337,6 +337,8 @@ static u64 execlists_update_context(struct drm_i915_gem_request *rq)
 	if (ppgtt && !i915_vm_is_48bit(&ppgtt->base))
 		execlists_update_context_pdps(ppgtt, reg_state);
 
+	i915_oa_update_reg_state(rq->engine, rq->ctx, reg_state);
+
 	return ce->lrc_desc;
 }
 
@@ -1879,6 +1881,9 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(dev_priv));
+
+		atomic_set(&ctx->engine[RCS].oa_state_dirty, 1);
+		i915_oa_update_reg_state(engine, ctx, regs);
 	}
 }
 
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 689fd7a418a7..20d65a317492 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1303,13 +1303,18 @@ struct drm_i915_gem_context_param {
 };
 
 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 
 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
-- 
2.12.0
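
For reference (not part of this series): a minimal userspace sketch, assuming
libdrm and the existing i915 perf open ioctl, showing how one of the Gen8+
report formats added to enum drm_i915_oa_format above might be requested. The
metrics-set ID and OA exponent below are placeholder values, not recommended
settings.

#include <stdint.h>
#include <sys/ioctl.h>
#include <xf86drm.h>
#include <i915_drm.h>

static int open_oa_stream(int drm_fd)
{
	uint64_t properties[] = {
		/* Include raw OA reports in samples read() from the stream */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,

		/* Placeholder metric set ID (e.g. as advertised via sysfs) */
		DRM_I915_PERF_PROP_OA_METRICS_SET, 1,

		/* One of the Gen8+ formats added above */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,

		/* Placeholder periodic sampling exponent */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	return drmIoctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}

On success the ioctl returns a new stream fd which can then be read() for OA
reports in the requested format.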

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v4 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (11 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 14/15] drm/i915/perf: per-gen timebase for checking sample freq Robert Bragg
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

These are auto-generated from an XML description of metric sets,
currently maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml
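
For orientation, each generated metric set in the diff below boils down to
b_counter/flex/mux register lists plus a small helper that picks the MUX
variant matching the fused slice/subslice mask. A condensed, self-contained
sketch of that shape (register values here are placeholders, not real
hardware programming):

#include <stddef.h>
#include <stdint.h>

struct oa_reg {
	uint32_t addr;
	uint32_t value;
};

/* Per-variant MUX programming; the real tables run to hundreds of entries. */
static const struct oa_reg mux_config_example_slices_0x01[] = {
	{ 0x9888, 0x105c00e0 },		/* placeholder value */
};

static const struct oa_reg mux_config_example_slices_0x02[] = {
	{ 0x9888, 0x10dc00e0 },		/* placeholder value */
};

/*
 * Mirrors the generated get_*_mux_config() helpers: return the first variant
 * whose slice mask matches the device fusing, or NULL (len = 0) if the
 * metric set doesn't apply to this fusing.
 */
static const struct oa_reg *
get_example_mux_config(uint32_t slice_mask, size_t *len)
{
	if (slice_mask & 0x01) {
		*len = sizeof(mux_config_example_slices_0x01) /
		       sizeof(mux_config_example_slices_0x01[0]);
		return mux_config_example_slices_0x01;
	} else if (slice_mask & 0x02) {
		*len = sizeof(mux_config_example_slices_0x02) /
		       sizeof(mux_config_example_slices_0x02[0]);
		return mux_config_example_slices_0x02;
	}

	*len = 0;
	return NULL;
}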

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 4856 ++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 2323 +++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_chv.c    | 2525 ++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_hsw.c    |   58 +-
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 3156 ++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 2702 +++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 2745 ++++++++++++++++++-
 7 files changed, 18176 insertions(+), 189 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
index b0b1b75fb431..eff198ea639b 100644
--- a/drivers/gpu/drm/i915/i915_oa_bdw.c
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -31,9 +31,30 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_DATA_PORT_READS_COALESCING,
+	METRIC_SET_ID_DATA_PORT_WRITES_COALESCING,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_bdw = 1;
+int i915_oa_n_builtin_metric_sets_bdw = 22;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -290,66 +311,4609 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return NULL;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x065c2100 },
+	{ _MMIO(0x9888), 0x0a5c0041 },
+	{ _MMIO(0x9888), 0x0c5c6600 },
+	{ _MMIO(0x9888), 0x005c6580 },
+	{ _MMIO(0x9888), 0x085c8000 },
+	{ _MMIO(0x9888), 0x0e5c8000 },
+	{ _MMIO(0x9888), 0x00580042 },
+	{ _MMIO(0x9888), 0x08582080 },
+	{ _MMIO(0x9888), 0x0c58004c },
+	{ _MMIO(0x9888), 0x0e582580 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x185b1000 },
+	{ _MMIO(0x9888), 0x1a5b0104 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x08380042 },
+	{ _MMIO(0x9888), 0x0a382080 },
+	{ _MMIO(0x9888), 0x0e38404c },
+	{ _MMIO(0x9888), 0x0238404b },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x16380000 },
+	{ _MMIO(0x9888), 0x18381145 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801062 },
+	{ _MMIO(0x9888), 0x41801084 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_2_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x06dc2100 },
+	{ _MMIO(0x9888), 0x0adc0041 },
+	{ _MMIO(0x9888), 0x0cdc6600 },
+	{ _MMIO(0x9888), 0x00dc6580 },
+	{ _MMIO(0x9888), 0x08dc8000 },
+	{ _MMIO(0x9888), 0x0edc8000 },
+	{ _MMIO(0x9888), 0x00d80042 },
+	{ _MMIO(0x9888), 0x08d82080 },
+	{ _MMIO(0x9888), 0x0cd8004c },
+	{ _MMIO(0x9888), 0x0ed82580 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x18db1000 },
+	{ _MMIO(0x9888), 0x1adb0104 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x08b80042 },
+	{ _MMIO(0x9888), 0x0ab82080 },
+	{ _MMIO(0x9888), 0x0eb8404c },
+	{ _MMIO(0x9888), 0x02b8404b },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x16b80000 },
+	{ _MMIO(0x9888), 0x18b81145 },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b92000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x238b0540 },
+	{ _MMIO(0x9888), 0x258baaa0 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d805000 },
+	{ _MMIO(0x9888), 0x4f800555 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800062 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01);
+		return mux_config_compute_basic_0_slices_0x01;
+	} else if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_2_slices_0x02);
+		return mux_config_compute_basic_2_slices_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0a1e0000 },
+	{ _MMIO(0x9888), 0x0c1f000f },
+	{ _MMIO(0x9888), 0x10176800 },
+	{ _MMIO(0x9888), 0x1191001f },
+	{ _MMIO(0x9888), 0x0b880320 },
+	{ _MMIO(0x9888), 0x01890c40 },
+	{ _MMIO(0x9888), 0x118a1c00 },
+	{ _MMIO(0x9888), 0x118d7c00 },
+	{ _MMIO(0x9888), 0x118e0020 },
+	{ _MMIO(0x9888), 0x118f4c00 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x13900001 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x0c3d8000 },
+	{ _MMIO(0x9888), 0x06584000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x081e0040 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x021f5400 },
+	{ _MMIO(0x9888), 0x001f0000 },
+	{ _MMIO(0x9888), 0x101f0010 },
+	{ _MMIO(0x9888), 0x0e1f0080 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c13c000 },
+	{ _MMIO(0x9888), 0x06164000 },
+	{ _MMIO(0x9888), 0x06170012 },
+	{ _MMIO(0x9888), 0x00170000 },
+	{ _MMIO(0x9888), 0x01910005 },
+	{ _MMIO(0x9888), 0x07880002 },
+	{ _MMIO(0x9888), 0x01880c00 },
+	{ _MMIO(0x9888), 0x0f880000 },
+	{ _MMIO(0x9888), 0x0d880000 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x09890032 },
+	{ _MMIO(0x9888), 0x078a0800 },
+	{ _MMIO(0x9888), 0x0f8a0a00 },
+	{ _MMIO(0x9888), 0x198a4000 },
+	{ _MMIO(0x9888), 0x1b8a2000 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x038a4000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b54c0 },
+	{ _MMIO(0x9888), 0x258baa55 },
+	{ _MMIO(0x9888), 0x278b0019 },
+	{ _MMIO(0x9888), 0x198c0100 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0f8d0015 },
+	{ _MMIO(0x9888), 0x018d1000 },
+	{ _MMIO(0x9888), 0x098d8000 },
+	{ _MMIO(0x9888), 0x0b8df000 },
+	{ _MMIO(0x9888), 0x0d8d3000 },
+	{ _MMIO(0x9888), 0x038de000 },
+	{ _MMIO(0x9888), 0x058d3000 },
+	{ _MMIO(0x9888), 0x0d8e0004 },
+	{ _MMIO(0x9888), 0x058e000c },
+	{ _MMIO(0x9888), 0x098e0000 },
+	{ _MMIO(0x9888), 0x078e0000 },
+	{ _MMIO(0x9888), 0x038e0000 },
+	{ _MMIO(0x9888), 0x0b8f0020 },
+	{ _MMIO(0x9888), 0x198f0c00 },
+	{ _MMIO(0x9888), 0x078f8000 },
+	{ _MMIO(0x9888), 0x098f4000 },
+	{ _MMIO(0x9888), 0x0b900980 },
+	{ _MMIO(0x9888), 0x03900d80 },
+	{ _MMIO(0x9888), 0x01900000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801111 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f801011 },
+	{ _MMIO(0x9888), 0x43800443 },
+	{ _MMIO(0x9888), 0x51801111 },
+	{ _MMIO(0x9888), 0x45800422 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x47800c60 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800422 },
+	{ _MMIO(0x9888), 0x41800021 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845800 },
+	{ _MMIO(0x9888), 0x15840018 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385000a },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840018 },
+	{ _MMIO(0x9888), 0x07844c80 },
+	{ _MMIO(0x9888), 0x09840d9a },
+	{ _MMIO(0x9888), 0x0b840e9c },
+	{ _MMIO(0x9888), 0x0d840f9e },
+	{ _MMIO(0x9888), 0x0f840010 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801042 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845400 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840010 },
+	{ _MMIO(0x9888), 0x07844880 },
+	{ _MMIO(0x9888), 0x09840992 },
+	{ _MMIO(0x9888), 0x0b840a94 },
+	{ _MMIO(0x9888), 0x0d840b96 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2d800147 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801082 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x143d0160 },
+	{ _MMIO(0x9888), 0x163d2800 },
+	{ _MMIO(0x9888), 0x183d0120 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x003d0011 },
+	{ _MMIO(0x9888), 0x063d0900 },
+	{ _MMIO(0x9888), 0x083d0a13 },
+	{ _MMIO(0x9888), 0x0a3d0b15 },
+	{ _MMIO(0x9888), 0x0c3d2317 },
+	{ _MMIO(0x9888), 0x043d21b7 },
+	{ _MMIO(0x9888), 0x103d0000 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e5825c1 },
+	{ _MMIO(0x9888), 0x00586100 },
+	{ _MMIO(0x9888), 0x0258204c },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_2_subslices_0x02[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x145b0160 },
+	{ _MMIO(0x9888), 0x165b2800 },
+	{ _MMIO(0x9888), 0x185b0120 },
+	{ _MMIO(0x9888), 0x0e5c25c1 },
+	{ _MMIO(0x9888), 0x005c6100 },
+	{ _MMIO(0x9888), 0x025c204c },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x005b0011 },
+	{ _MMIO(0x9888), 0x065b0900 },
+	{ _MMIO(0x9888), 0x085b0a13 },
+	{ _MMIO(0x9888), 0x0a5b0b15 },
+	{ _MMIO(0x9888), 0x0c5b2317 },
+	{ _MMIO(0x9888), 0x045b21b7 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x0e5b0000 },
+	{ _MMIO(0x9888), 0x1a5b0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_4_subslices_0x04[] = {
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x143a0160 },
+	{ _MMIO(0x9888), 0x163a2800 },
+	{ _MMIO(0x9888), 0x183a0120 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e38a5c1 },
+	{ _MMIO(0x9888), 0x0038a100 },
+	{ _MMIO(0x9888), 0x0238204c },
+	{ _MMIO(0x9888), 0x16388000 },
+	{ _MMIO(0x9888), 0x183802aa },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06380000 },
+	{ _MMIO(0x9888), 0x08388000 },
+	{ _MMIO(0x9888), 0x0a388000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x003a0011 },
+	{ _MMIO(0x9888), 0x063a0900 },
+	{ _MMIO(0x9888), 0x083a0a13 },
+	{ _MMIO(0x9888), 0x0a3a0b15 },
+	{ _MMIO(0x9888), 0x0c3a2317 },
+	{ _MMIO(0x9888), 0x043a21b7 },
+	{ _MMIO(0x9888), 0x103a0000 },
+	{ _MMIO(0x9888), 0x0e3a0000 },
+	{ _MMIO(0x9888), 0x1a3a0000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_1_subslices_0x08[] = {
+	{ _MMIO(0x9888), 0x14bd0160 },
+	{ _MMIO(0x9888), 0x16bd2800 },
+	{ _MMIO(0x9888), 0x18bd0120 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x00dcc000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00bd0011 },
+	{ _MMIO(0x9888), 0x06bd0900 },
+	{ _MMIO(0x9888), 0x08bd0a13 },
+	{ _MMIO(0x9888), 0x0abd0b15 },
+	{ _MMIO(0x9888), 0x0cbd2317 },
+	{ _MMIO(0x9888), 0x04bd21b7 },
+	{ _MMIO(0x9888), 0x10bd0000 },
+	{ _MMIO(0x9888), 0x0ebd0000 },
+	{ _MMIO(0x9888), 0x1abd0000 },
+	{ _MMIO(0x9888), 0x0ed825c1 },
+	{ _MMIO(0x9888), 0x00d86100 },
+	{ _MMIO(0x9888), 0x02d8204c },
+	{ _MMIO(0x9888), 0x06d88000 },
+	{ _MMIO(0x9888), 0x08d8c000 },
+	{ _MMIO(0x9888), 0x0ad8c000 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x04d8c000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb4000 },
+	{ _MMIO(0x9888), 0x18db5400 },
+	{ _MMIO(0x9888), 0x1adb0155 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db4000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_3_subslices_0x10[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x14db0160 },
+	{ _MMIO(0x9888), 0x16db2800 },
+	{ _MMIO(0x9888), 0x18db0120 },
+	{ _MMIO(0x9888), 0x0edc25c1 },
+	{ _MMIO(0x9888), 0x00dc6100 },
+	{ _MMIO(0x9888), 0x02dc204c },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00db0011 },
+	{ _MMIO(0x9888), 0x06db0900 },
+	{ _MMIO(0x9888), 0x08db0a13 },
+	{ _MMIO(0x9888), 0x0adb0b15 },
+	{ _MMIO(0x9888), 0x0cdb2317 },
+	{ _MMIO(0x9888), 0x04db21b7 },
+	{ _MMIO(0x9888), 0x10db0000 },
+	{ _MMIO(0x9888), 0x0edb0000 },
+	{ _MMIO(0x9888), 0x1adb0000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_5_subslices_0x20[] = {
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x14ba0160 },
+	{ _MMIO(0x9888), 0x16ba2800 },
+	{ _MMIO(0x9888), 0x18ba0120 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb8a5c1 },
+	{ _MMIO(0x9888), 0x00b8a100 },
+	{ _MMIO(0x9888), 0x02b8204c },
+	{ _MMIO(0x9888), 0x16b88000 },
+	{ _MMIO(0x9888), 0x18b802aa },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x06b80000 },
+	{ _MMIO(0x9888), 0x08b88000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x00ba0011 },
+	{ _MMIO(0x9888), 0x06ba0900 },
+	{ _MMIO(0x9888), 0x08ba0a13 },
+	{ _MMIO(0x9888), 0x0aba0b15 },
+	{ _MMIO(0x9888), 0x0cba2317 },
+	{ _MMIO(0x9888), 0x04ba21b7 },
+	{ _MMIO(0x9888), 0x10ba0000 },
+	{ _MMIO(0x9888), 0x0eba0000 },
+	{ _MMIO(0x9888), 0x1aba0000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		return mux_config_compute_extended_0_subslices_0x01;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x02) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_2_subslices_0x02);
+		return mux_config_compute_extended_2_subslices_0x02;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x04) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_4_subslices_0x04);
+		return mux_config_compute_extended_4_subslices_0x04;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x08) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_1_subslices_0x08);
+		return mux_config_compute_extended_1_subslices_0x08;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x10) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_3_subslices_0x10);
+		return mux_config_compute_extended_3_subslices_0x10;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x20) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_5_subslices_0x20);
+		return mux_config_compute_extended_5_subslices_0x20;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x143f00b3 },
+	{ _MMIO(0x9888), 0x14bf00b3 },
+	{ _MMIO(0x9888), 0x138303c0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x003f0029 },
+	{ _MMIO(0x9888), 0x063f1400 },
+	{ _MMIO(0x9888), 0x083f1225 },
+	{ _MMIO(0x9888), 0x0e3f1327 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x005a4000 },
+	{ _MMIO(0x9888), 0x065a8000 },
+	{ _MMIO(0x9888), 0x085ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x001d4000 },
+	{ _MMIO(0x9888), 0x061d8000 },
+	{ _MMIO(0x9888), 0x081dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1f2a00 },
+	{ _MMIO(0x9888), 0x101f0280 },
+	{ _MMIO(0x9888), 0x00391000 },
+	{ _MMIO(0x9888), 0x06394000 },
+	{ _MMIO(0x9888), 0x08395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x0abf1429 },
+	{ _MMIO(0x9888), 0x0cbf1225 },
+	{ _MMIO(0x9888), 0x00bf1380 },
+	{ _MMIO(0x9888), 0x02bf0026 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0adac000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02da4000 },
+	{ _MMIO(0x9888), 0x0a9dc000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029d4000 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f002a },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0ab95000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b91000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f880003 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a8020 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x238b0520 },
+	{ _MMIO(0x9888), 0x258ba950 },
+	{ _MMIO(0x9888), 0x278b0016 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0001 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x03835180 },
+	{ _MMIO(0x9888), 0x05834022 },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800840 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800800 },
+	{ _MMIO(0x9888), 0x418014a2 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fff2 },
+	{ _MMIO(0x2774), 0x00007ff0 },
+	{ _MMIO(0x2778), 0x0007ffe2 },
+	{ _MMIO(0x277c), 0x00007ff0 },
+	{ _MMIO(0x2780), 0x0007ffc2 },
+	{ _MMIO(0x2784), 0x00007ff0 },
+	{ _MMIO(0x2788), 0x0007ff82 },
+	{ _MMIO(0x278c), 0x00007ff0 },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000bfef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000bfdf },
+	{ _MMIO(0x27a0), 0x0007fffa },
+	{ _MMIO(0x27a4), 0x0000bfbf },
+	{ _MMIO(0x27a8), 0x0007fffa },
+	{ _MMIO(0x27ac), 0x0000bf7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_reads_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x163d240b },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x185b5520 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d00b0 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d10a0 },
+	{ _MMIO(0x9888), 0x0c3d11a2 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x045b6300 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5555 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
+static const struct i915_oa_reg *
+get_data_port_reads_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					  int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_data_port_reads_coalescing_0_subslices_0x01);
+		return mux_config_data_port_reads_coalescing_0_subslices_0x01;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ff72 },
+	{ _MMIO(0x2774), 0x0000bfd0 },
+	{ _MMIO(0x2778), 0x0007ff62 },
+	{ _MMIO(0x277c), 0x0000bfd0 },
+	{ _MMIO(0x2780), 0x0007ff42 },
+	{ _MMIO(0x2784), 0x0000bfd0 },
+	{ _MMIO(0x2788), 0x0007ff02 },
+	{ _MMIO(0x278c), 0x0000bfd0 },
+	{ _MMIO(0x2790), 0x0005fff2 },
+	{ _MMIO(0x2794), 0x0000bfd0 },
+	{ _MMIO(0x2798), 0x0005ffe2 },
+	{ _MMIO(0x279c), 0x0000bfd0 },
+	{ _MMIO(0x27a0), 0x0005ffc2 },
+	{ _MMIO(0x27a4), 0x0000bfd0 },
+	{ _MMIO(0x27a8), 0x0005ff82 },
+	{ _MMIO(0x27ac), 0x0000bfd0 },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_writes_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x143d0120 },
+	{ _MMIO(0x9888), 0x163d2400 },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d0094 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d1814 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0c3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x045b6a80 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0141 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f0282 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381415 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a82a0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b1555 },
+	{ _MMIO(0x9888), 0x278b0014 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x21852aaa },
+	{ _MMIO(0x9888), 0x23850028 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830141 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
+static const struct i915_oa_reg *
+get_data_port_writes_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					   int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_data_port_writes_coalescing_0_subslices_0x01);
+		return mux_config_data_port_writes_coalescing_0_subslices_0x01;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static const struct i915_oa_reg *
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_4);
+	return mux_config_l3_4;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_1);
+	return mux_config_sampler_1;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_2);
+	return mux_config_sampler_2;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x161503e0 },
+	{ _MMIO(0x9888), 0x163503e0 },
+	{ _MMIO(0x9888), 0x165503e0 },
+	{ _MMIO(0x9888), 0x169503e0 },
+	{ _MMIO(0x9888), 0x16b503e0 },
+	{ _MMIO(0x9888), 0x16d503e0 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x04584000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b8000 },
+	{ _MMIO(0x9888), 0x0e1f00a8 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x06141000 },
+	{ _MMIO(0x9888), 0x041500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x0a338000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x0435c300 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x0c538000 },
+	{ _MMIO(0x9888), 0x06544000 },
+	{ _MMIO(0x9888), 0x065500c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dc4000 },
+	{ _MMIO(0x9888), 0x02bd8000 },
+	{ _MMIO(0x9888), 0x00d88000 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f0002 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x06ba8000 },
+	{ _MMIO(0x9888), 0x02938000 },
+	{ _MMIO(0x9888), 0x04942000 },
+	{ _MMIO(0x9888), 0x0095c300 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x04b44000 },
+	{ _MMIO(0x9888), 0x02b500c3 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d38000 },
+	{ _MMIO(0x9888), 0x04d48000 },
+	{ _MMIO(0x9888), 0x02d5c300 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b3500 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c40 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41801482 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x14100812 },
+	{ _MMIO(0x9888), 0x14125800 },
+	{ _MMIO(0x9888), 0x161200c0 },
+	{ _MMIO(0x9888), 0x14300812 },
+	{ _MMIO(0x9888), 0x14325800 },
+	{ _MMIO(0x9888), 0x163200c0 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183d2800 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b9400 },
+	{ _MMIO(0x9888), 0x1a5b002a },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f002a },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380155 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x00100047 },
+	{ _MMIO(0x9888), 0x06101a80 },
+	{ _MMIO(0x9888), 0x10100000 },
+	{ _MMIO(0x9888), 0x0810c000 },
+	{ _MMIO(0x9888), 0x0811c000 },
+	{ _MMIO(0x9888), 0x08126151 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x0e134000 },
+	{ _MMIO(0x9888), 0x161300a0 },
+	{ _MMIO(0x9888), 0x0a301ac7 },
+	{ _MMIO(0x9888), 0x10300000 },
+	{ _MMIO(0x9888), 0x0c30c000 },
+	{ _MMIO(0x9888), 0x0c31c000 },
+	{ _MMIO(0x9888), 0x0c326151 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x16332a00 },
+	{ _MMIO(0x9888), 0x18330001 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a2aa0 },
+	{ _MMIO(0x9888), 0x238b0020 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0001 },
+	{ _MMIO(0x9888), 0x1f850080 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830015 },
+	{ _MMIO(0x9888), 0x01844000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800002 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800884 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800002 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x198b0000 },
+	{ _MMIO(0x9888), 0x078b0066 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x21850008 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_READS_COALESCING:
+		dev_priv->perf.oa.mux_regs =
+			get_data_port_reads_coalescing_mux_config(dev_priv,
+								  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_READS_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_reads_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_reads_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_WRITES_COALESCING:
+		dev_priv->perf.oa.mux_regs =
+			get_data_port_writes_coalescing_mux_config(dev_priv,
+								   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_WRITES_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_writes_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_writes_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_4_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_1_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_2_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "35fbc9b2-a891-40a6-a38d-022bb7057552",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "233d0544-fff7-4281-8291-e02f222aff72",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "2b255d48-2117-4fef-a8f7-f151e1d25a2c",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "f7fd3220-b466-4a4d-9f98-b0caf3f2394c",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "e99ccaca-821c-4df9-97a7-96bdb7204e43",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27a364dc-8225-4ecb-b607-d6f1925598d9",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_data_port_reads_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_READS_COALESCING);
+}
+
+static struct device_attribute dev_attr_data_port_reads_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_reads_coalescing_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_data_port_reads_coalescing[] = {
+	&dev_attr_data_port_reads_coalescing_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_data_port_reads_coalescing = {
+	.name = "857fc630-2f09-4804-85f1-084adfadd5ab",
+	.attrs =  attrs_data_port_reads_coalescing,
+};
+
+static ssize_t
+show_data_port_writes_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_WRITES_COALESCING);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_data_port_writes_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_writes_coalescing_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_data_port_writes_coalescing[] = {
+	&dev_attr_data_port_writes_coalescing_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_data_port_writes_coalescing = {
+	.name = "343ebc99-4a55-414c-8c17-d8e259cf5e20",
+	.attrs =  attrs_data_port_writes_coalescing,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "7bdafd88-a4fa-4ed5-bc09-1a977aa5be3e",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
 }
 
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "9385ebb2-f34f-4aa5-aec5-7e9cbbea0f0b",
+	.attrs =  attrs_l3_1,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_l3_2_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_l3_2_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_l3_2 = {
+	.name = "446ae59b-ff2e-41c9-b49e-0184a54bf00a",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "84a7956f-1ea4-4d0d-837f-e39a0376e38c",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "92b493d9-df18-4bed-be06-5cac6f2a6f5f",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "14345c35-cc46-40d0-bb04-6ed1fbb43679",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "f0c6ba37-d3d3-4211-91b5-226730312a54",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "30bf3702-48cf-4bca-b412-7cf50bb2f564",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "238bec85-df05-44f3-b905-d166712f2451",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "24bf02cd-8693-4583-981c-c4165b33da01",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "8fb61ba2-2fbb-454c-a136-2dec5a8a595e",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "e1743ca0-7fc8-410b-a066-de7bbb9280b7",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "d6de6f55-e526-4f79-a6a6-d7315c09044e",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -363,9 +4927,177 @@ i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+		if (ret)
+			goto error_data_port_reads_coalescing;
+	}
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+		if (ret)
+			goto error_data_port_writes_coalescing;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+error_data_port_writes_coalescing:
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+error_data_port_reads_coalescing:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -377,4 +5109,46 @@ i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
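
For context on how the GUID-named groups registered above get used: each metrics/<uuid> directory only exports an "id" file, and userspace (gputop, mesa) reads that number and passes it back as the OA metrics set property when opening a perf stream. A minimal sketch of that round trip follows; it is illustrative only, not part of the patch, and it assumes card0, the BDW render basic UUID from this file and the gen8 A32u40_A4u32_B8_C8 report format.

/*
 * Illustrative sketch only (not part of this patch): resolve a metric
 * set UUID exposed under /sys/class/drm/card0/metrics/<uuid>/id to its
 * numeric ID and open an OA stream with it.  The card0 path, UUID and
 * report format are assumptions for the example, not requirements.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

int main(void)
{
	/* BDW "render basic" UUID from i915_oa_bdw.c above */
	const char *uuid = "b541bd57-0e0f-4154-b4c0-5858010a2bf7";
	unsigned long long metrics_set = 0;
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/metrics/%s/id", uuid);
	f = fopen(path, "r");
	if (!f || fscanf(f, "%llu", &metrics_set) != 1)
		return 1; /* config not advertised for this gen/SKU */
	fclose(f);

	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
		/* gen8+ report layout */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		/* sample period = 2^(exponent + 1) * timestamp period */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};
	int drm_fd = open("/dev/dri/card0", O_RDWR);
	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);

	if (stream_fd < 0)
		return 1;
	/* OA reports can now be read() from stream_fd as records framed
	 * by struct drm_i915_perf_record_header.
	 */
	close(stream_fd);
	close(drm_fd);
	return 0;
}
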
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
index b10e44643bc1..7bc979a7c159 100644
--- a/drivers/gpu/drm/i915/i915_oa_bxt.c
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -31,9 +31,23 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_bxt = 1;
+int i915_oa_n_builtin_metric_sets_bxt = 15;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -148,6 +162,1497 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return NULL;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d4000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x0c2e1400 },
+	{ _MMIO(0x9888), 0x0e2e5100 },
+	{ _MMIO(0x9888), 0x102e0114 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004f6b42 },
+	{ _MMIO(0x9888), 0x064f6200 },
+	{ _MMIO(0x9888), 0x084f4100 },
+	{ _MMIO(0x9888), 0x0a4f0061 },
+	{ _MMIO(0x9888), 0x0c4f6c4c },
+	{ _MMIO(0x9888), 0x0e4f4b00 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0f8800 },
+	{ _MMIO(0x9888), 0x1c0f08a2 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c1451 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c0010 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x19938a28 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x19900177 },
+	{ _MMIO(0x9888), 0x1b900178 },
+	{ _MMIO(0x9888), 0x1d900125 },
+	{ _MMIO(0x9888), 0x1f900123 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c2e001f },
+	{ _MMIO(0x9888), 0x0a2f0000 },
+	{ _MMIO(0x9888), 0x10186800 },
+	{ _MMIO(0x9888), 0x11810019 },
+	{ _MMIO(0x9888), 0x15810013 },
+	{ _MMIO(0x9888), 0x13820020 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x17840000 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x21860000 },
+	{ _MMIO(0x9888), 0x178703e0 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x022e5400 },
+	{ _MMIO(0x9888), 0x002e0000 },
+	{ _MMIO(0x9888), 0x0e2e0080 },
+	{ _MMIO(0x9888), 0x082f0040 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x06174000 },
+	{ _MMIO(0x9888), 0x06180012 },
+	{ _MMIO(0x9888), 0x00180000 },
+	{ _MMIO(0x9888), 0x0d804000 },
+	{ _MMIO(0x9888), 0x0f804000 },
+	{ _MMIO(0x9888), 0x05804000 },
+	{ _MMIO(0x9888), 0x09810200 },
+	{ _MMIO(0x9888), 0x0b810030 },
+	{ _MMIO(0x9888), 0x03810003 },
+	{ _MMIO(0x9888), 0x21819140 },
+	{ _MMIO(0x9888), 0x23819050 },
+	{ _MMIO(0x9888), 0x25810018 },
+	{ _MMIO(0x9888), 0x0b820980 },
+	{ _MMIO(0x9888), 0x03820d80 },
+	{ _MMIO(0x9888), 0x11820000 },
+	{ _MMIO(0x9888), 0x0182c000 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x09824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0d830004 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x0f831000 },
+	{ _MMIO(0x9888), 0x01848072 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x09844000 },
+	{ _MMIO(0x9888), 0x0f848000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x09860092 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x01869100 },
+	{ _MMIO(0x9888), 0x0f870065 },
+	{ _MMIO(0x9888), 0x01870000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b952000 },
+	{ _MMIO(0x9888), 0x1d955055 },
+	{ _MMIO(0x9888), 0x1f951455 },
+	{ _MMIO(0x9888), 0x0992a000 },
+	{ _MMIO(0x9888), 0x0f928000 },
+	{ _MMIO(0x9888), 0x1192a800 },
+	{ _MMIO(0x9888), 0x1392028a },
+	{ _MMIO(0x9888), 0x0b92a000 },
+	{ _MMIO(0x9888), 0x0d922000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c01 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900863 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900c22 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x41900003 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900170 },
+	{ _MMIO(0x9888), 0x21900171 },
+	{ _MMIO(0x9888), 0x23900172 },
+	{ _MMIO(0x9888), 0x25900173 },
+	{ _MMIO(0x9888), 0x27900174 },
+	{ _MMIO(0x9888), 0x29900175 },
+	{ _MMIO(0x9888), 0x2b900176 },
+	{ _MMIO(0x9888), 0x2d900177 },
+	{ _MMIO(0x9888), 0x2f90017f },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900000 },
+	{ _MMIO(0x9888), 0x41900080 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900180 },
+	{ _MMIO(0x9888), 0x21900181 },
+	{ _MMIO(0x9888), 0x23900182 },
+	{ _MMIO(0x9888), 0x25900183 },
+	{ _MMIO(0x9888), 0x27900184 },
+	{ _MMIO(0x9888), 0x29900185 },
+	{ _MMIO(0x9888), 0x2b900186 },
+	{ _MMIO(0x9888), 0x2d900187 },
+	{ _MMIO(0x9888), 0x2f900170 },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x141c0160 },
+	{ _MMIO(0x9888), 0x161c0015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d5000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x0c2e5400 },
+	{ _MMIO(0x9888), 0x0e2e5515 },
+	{ _MMIO(0x9888), 0x102e0155 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x0e4cc000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0a4ea000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x0e4f4b41 },
+	{ _MMIO(0x9888), 0x004f4200 },
+	{ _MMIO(0x9888), 0x024f404c },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0a1bc000 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x001c0031 },
+	{ _MMIO(0x9888), 0x061c1900 },
+	{ _MMIO(0x9888), 0x081c1a33 },
+	{ _MMIO(0x9888), 0x0a1c1b35 },
+	{ _MMIO(0x9888), 0x0c1c3337 },
+	{ _MMIO(0x9888), 0x041c31c7 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0fa8aa },
+	{ _MMIO(0x9888), 0x1c0f0aaa },
+	{ _MMIO(0x9888), 0x182c8000 },
+	{ _MMIO(0x9888), 0x1c2c6aaa },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c2950 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993aaaa },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900400 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900001 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extended);
+	return mux_config_compute_extended;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c03b0 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x0c2e0400 },
+	{ _MMIO(0x9888), 0x0e2e1500 },
+	{ _MMIO(0x9888), 0x102e0140 },
+	{ _MMIO(0x9888), 0x044c4000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004e2000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x1a4f4001 },
+	{ _MMIO(0x9888), 0x1c4f5005 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x180f1000 },
+	{ _MMIO(0x9888), 0x1a0fa800 },
+	{ _MMIO(0x9888), 0x1c0f0a00 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c4015 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x03931980 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993a00a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900178 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x022d4000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x064c8000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x024f6100 },
+	{ _MMIO(0x9888), 0x044f416b },
+	{ _MMIO(0x9888), 0x064f004b },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1a0f02a8 },
+	{ _MMIO(0x9888), 0x1a2c5500 },
+	{ _MMIO(0x9888), 0x0f808000 },
+	{ _MMIO(0x9888), 0x25810020 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1f951000 },
+	{ _MMIO(0x9888), 0x13920200 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4d900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x12643400 },
+	{ _MMIO(0x9888), 0x12653400 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x0a640024 },
+	{ _MMIO(0x9888), 0x10640000 },
+	{ _MMIO(0x9888), 0x04640000 },
+	{ _MMIO(0x9888), 0x0c650024 },
+	{ _MMIO(0x9888), 0x10650000 },
+	{ _MMIO(0x9888), 0x06650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_lt_0x03[] = {
+	{ _MMIO(0x9888), 0x14640340 },
+	{ _MMIO(0x9888), 0x14650340 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x04642400 },
+	{ _MMIO(0x9888), 0x22640000 },
+	{ _MMIO(0x9888), 0x1a640000 },
+	{ _MMIO(0x9888), 0x06650024 },
+	{ _MMIO(0x9888), 0x22650000 },
+	{ _MMIO(0x9888), 0x1c650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
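+/*
+ * The L3_1 mux config is SKU dependent: device revisions >= 0x03 get the
+ * first variant above, earlier revisions the second. The trailing NULL
+ * return below follows the generated-code pattern and cannot normally be
+ * reached, since the two revision checks cover all values.
+ */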
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		*len = ARRAY_SIZE(mux_config_l3_1_0_sku_gte_0x03);
+		return mux_config_l3_1_0_sku_gte_0x03;
+	} else if (dev_priv->drm.pdev->revision < 0x03) {
+		*len = ARRAY_SIZE(mux_config_l3_1_0_sku_lt_0x03);
+		return mux_config_l3_1_0_sku_lt_0x03;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102d7800 },
+	{ _MMIO(0x9888), 0x122d79e0 },
+	{ _MMIO(0x9888), 0x0c2f0004 },
+	{ _MMIO(0x9888), 0x100e3800 },
+	{ _MMIO(0x9888), 0x180f0005 },
+	{ _MMIO(0x9888), 0x002d0940 },
+	{ _MMIO(0x9888), 0x022d802f },
+	{ _MMIO(0x9888), 0x042d4013 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0050 },
+	{ _MMIO(0x9888), 0x022f0010 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x040e0480 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x060f0027 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x1a0f0040 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x439014a0 },
+	{ _MMIO(0x9888), 0x459000a4 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x121300a0 },
+	{ _MMIO(0x9888), 0x141600ab },
+	{ _MMIO(0x9888), 0x123300a0 },
+	{ _MMIO(0x9888), 0x143600ab },
+	{ _MMIO(0x9888), 0x125300a0 },
+	{ _MMIO(0x9888), 0x145600ab },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e01a0 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0065 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0800 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f023f },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2cc030 },
+	{ _MMIO(0x9888), 0x04132180 },
+	{ _MMIO(0x9888), 0x02130000 },
+	{ _MMIO(0x9888), 0x0c148000 },
+	{ _MMIO(0x9888), 0x0e142000 },
+	{ _MMIO(0x9888), 0x04148000 },
+	{ _MMIO(0x9888), 0x1e150140 },
+	{ _MMIO(0x9888), 0x1c150040 },
+	{ _MMIO(0x9888), 0x0c163000 },
+	{ _MMIO(0x9888), 0x0e160068 },
+	{ _MMIO(0x9888), 0x10160000 },
+	{ _MMIO(0x9888), 0x18160000 },
+	{ _MMIO(0x9888), 0x0a164000 },
+	{ _MMIO(0x9888), 0x04330043 },
+	{ _MMIO(0x9888), 0x02330000 },
+	{ _MMIO(0x9888), 0x0234a000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x1c350015 },
+	{ _MMIO(0x9888), 0x02363460 },
+	{ _MMIO(0x9888), 0x10360000 },
+	{ _MMIO(0x9888), 0x04360000 },
+	{ _MMIO(0x9888), 0x06360000 },
+	{ _MMIO(0x9888), 0x08364000 },
+	{ _MMIO(0x9888), 0x06530043 },
+	{ _MMIO(0x9888), 0x02530000 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x06542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550100 },
+	{ _MMIO(0x9888), 0x0e563000 },
+	{ _MMIO(0x9888), 0x00563400 },
+	{ _MMIO(0x9888), 0x10560000 },
+	{ _MMIO(0x9888), 0x18560000 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x0c564000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9014a0 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900820 },
+	{ _MMIO(0x9888), 0x45901022 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x141a0000 },
+	{ _MMIO(0x9888), 0x143a0000 },
+	{ _MMIO(0x9888), 0x145a0000 },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0150 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e006a },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0bc0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f0302 },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2c00f0 },
+	{ _MMIO(0x9888), 0x021a3080 },
+	{ _MMIO(0x9888), 0x041a31e5 },
+	{ _MMIO(0x9888), 0x02148000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150054 },
+	{ _MMIO(0x9888), 0x06168000 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x0c3a3280 },
+	{ _MMIO(0x9888), 0x0e3a0063 },
+	{ _MMIO(0x9888), 0x063a0061 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x0c348000 },
+	{ _MMIO(0x9888), 0x0e342000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1e350140 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x18360028 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x0e5a3080 },
+	{ _MMIO(0x9888), 0x005a3280 },
+	{ _MMIO(0x9888), 0x025a0063 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x02542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550001 },
+	{ _MMIO(0x9888), 0x18560080 },
+	{ _MMIO(0x9888), 0x02568000 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x45901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x141a026b },
+	{ _MMIO(0x9888), 0x143a0173 },
+	{ _MMIO(0x9888), 0x145a026b },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0069 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x180f6000 },
+	{ _MMIO(0x9888), 0x1a0f030a },
+	{ _MMIO(0x9888), 0x1a2c03c0 },
+	{ _MMIO(0x9888), 0x041a37e7 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150050 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x003a3380 },
+	{ _MMIO(0x9888), 0x063a006f },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x00348000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1a352000 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x02368000 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x025a37e7 },
+	{ _MMIO(0x9888), 0x0254a000 },
+	{ _MMIO(0x9888), 0x1c550005 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x06568000 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900020 },
+	{ _MMIO(0x9888), 0x45901080 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x141a001f },
+	{ _MMIO(0x9888), 0x143a001f },
+	{ _MMIO(0x9888), 0x145a001f },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0094 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x1a0f00e0 },
+	{ _MMIO(0x9888), 0x1a2c0c00 },
+	{ _MMIO(0x9888), 0x061a0063 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x06142000 },
+	{ _MMIO(0x9888), 0x1c150100 },
+	{ _MMIO(0x9888), 0x0c168000 },
+	{ _MMIO(0x9888), 0x043a3180 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x04348000 },
+	{ _MMIO(0x9888), 0x1c350040 },
+	{ _MMIO(0x9888), 0x0a368000 },
+	{ _MMIO(0x9888), 0x045a0063 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x1c550010 },
+	{ _MMIO(0x9888), 0x08568000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900004 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
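+/*
+ * TEST_OA is a minimal config, presumably intended for exercising and
+ * validating the OA unit (e.g. from igt) rather than for profiling; note
+ * it has no flex EU config.
+ */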
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x19800000 },
+	{ _MMIO(0x9888), 0x07800063 },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x23810008 },
+	{ _MMIO(0x9888), 0x1d950400 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
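+/*
+ * Map the metrics_set ID requested by userspace onto the mux, boolean
+ * counter and flex EU register lists defined above, which i915_perf then
+ * programs when the stream is configured.
+ */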
 int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.mux_regs = NULL;
@@ -157,13 +1662,313 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
 		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
 		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -173,14 +1978,64 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 		}
 
 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_tdl_2;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_tdl_2);
 
 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_tdl_2;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
 
 		return 0;
 	default:
@@ -210,6 +2065,314 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };
 
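+/*
+ * Each of the sysfs attribute groups below exposes a metric set under its
+ * config GUID with a read-only "id" file; userspace reads that ID and
+ * passes it back when opening an i915 perf stream to select the set.
+ */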
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "012d72cf-82a9-4d25-8ddf-74076fd30797",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "ce416533-e49e-4211-80af-ec513590a914",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "398e2452-18d7-42d0-b241-e4d0a9148ada",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "d324a0d6-7269-4847-a5c2-6f71ddc7fed5",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "caf3596a-7bb1-4dec-b3b3-2a080d283b49",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "49b956e2-d5b9-47e0-9d8a-cee5e8cec527",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "f64ef50a-bdba-4b35-8f09-203c13d8ee5a",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "00ad5a41-7eab-4f7a-9103-49d411c67219",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "46dc44ca-491c-4cc1-a951-e7b3e62bf02b",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "8364e2a8-af63-40af-b0d5-42969a255654",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "175c8092-cb25-4d1e-8dc7-b4fdd39e2d92",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "d260f03f-b34d-4b49-a44e-436819117332",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "fa6ecf21-2cb8-4d0b-9308-6e4a7b4ca87a",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "5ee72f5c-092f-421e-8b70-225f7c3e9612",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 {
@@ -221,9 +2384,121 @@ i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -235,4 +2510,32 @@ i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
index 1c2889e819c8..878377c75e47 100644
--- a/drivers/gpu/drm/i915/i915_oa_chv.c
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -31,9 +31,22 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_chv = 1;
+int i915_oa_n_builtin_metric_sets_chv = 14;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2740), 0x00000000 },
@@ -135,6 +148,1757 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return mux_config_render_basic;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x2e5800e0 },
+	{ _MMIO(0x9888), 0x2e3800e0 },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x3d800140 },
+	{ _MMIO(0x9888), 0x08580042 },
+	{ _MMIO(0x9888), 0x0c580040 },
+	{ _MMIO(0x9888), 0x1058004c },
+	{ _MMIO(0x9888), 0x1458004b },
+	{ _MMIO(0x9888), 0x04580000 },
+	{ _MMIO(0x9888), 0x00580000 },
+	{ _MMIO(0x9888), 0x00195555 },
+	{ _MMIO(0x9888), 0x06380042 },
+	{ _MMIO(0x9888), 0x0a380040 },
+	{ _MMIO(0x9888), 0x0e38004c },
+	{ _MMIO(0x9888), 0x1238004b },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x00384444 },
+	{ _MMIO(0x9888), 0x003a5555 },
+	{ _MMIO(0x9888), 0x018bffff },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x17800074 },
+	{ _MMIO(0x9888), 0x1980007d },
+	{ _MMIO(0x9888), 0x1b80007c },
+	{ _MMIO(0x9888), 0x1d8000b6 },
+	{ _MMIO(0x9888), 0x1f8000b7 },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x03800000 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x4980012a },
+	{ _MMIO(0x9888), 0x4b80012a },
+	{ _MMIO(0x9888), 0x4d80012a },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x518001ce },
+	{ _MMIO(0x9888), 0x538001ce },
+	{ _MMIO(0x9888), 0x5580000e },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x261e0000 },
+	{ _MMIO(0x9888), 0x281f000f },
+	{ _MMIO(0x9888), 0x2817001a },
+	{ _MMIO(0x9888), 0x2791001f },
+	{ _MMIO(0x9888), 0x27880019 },
+	{ _MMIO(0x9888), 0x2d890000 },
+	{ _MMIO(0x9888), 0x278a0007 },
+	{ _MMIO(0x9888), 0x298d001f },
+	{ _MMIO(0x9888), 0x278e0020 },
+	{ _MMIO(0x9888), 0x2b8f0012 },
+	{ _MMIO(0x9888), 0x29900000 },
+	{ _MMIO(0x9888), 0x00184000 },
+	{ _MMIO(0x9888), 0x02181000 },
+	{ _MMIO(0x9888), 0x02194000 },
+	{ _MMIO(0x9888), 0x141e0002 },
+	{ _MMIO(0x9888), 0x041e0000 },
+	{ _MMIO(0x9888), 0x001e0000 },
+	{ _MMIO(0x9888), 0x221f0015 },
+	{ _MMIO(0x9888), 0x041f0000 },
+	{ _MMIO(0x9888), 0x001f4000 },
+	{ _MMIO(0x9888), 0x021f0000 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0213c000 },
+	{ _MMIO(0x9888), 0x02164000 },
+	{ _MMIO(0x9888), 0x24170012 },
+	{ _MMIO(0x9888), 0x04170000 },
+	{ _MMIO(0x9888), 0x07910005 },
+	{ _MMIO(0x9888), 0x05910000 },
+	{ _MMIO(0x9888), 0x01911500 },
+	{ _MMIO(0x9888), 0x03910501 },
+	{ _MMIO(0x9888), 0x0d880002 },
+	{ _MMIO(0x9888), 0x1d880003 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x0b890032 },
+	{ _MMIO(0x9888), 0x1b890031 },
+	{ _MMIO(0x9888), 0x05890000 },
+	{ _MMIO(0x9888), 0x01890040 },
+	{ _MMIO(0x9888), 0x03890040 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x198a0004 },
+	{ _MMIO(0x9888), 0x058a0000 },
+	{ _MMIO(0x9888), 0x018a8050 },
+	{ _MMIO(0x9888), 0x038a2050 },
+	{ _MMIO(0x9888), 0x018b95a9 },
+	{ _MMIO(0x9888), 0x038be5a9 },
+	{ _MMIO(0x9888), 0x018c1500 },
+	{ _MMIO(0x9888), 0x038c0501 },
+	{ _MMIO(0x9888), 0x178d0015 },
+	{ _MMIO(0x9888), 0x058d0000 },
+	{ _MMIO(0x9888), 0x138e0004 },
+	{ _MMIO(0x9888), 0x218e000c },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x018e0500 },
+	{ _MMIO(0x9888), 0x038e0101 },
+	{ _MMIO(0x9888), 0x0f8f0027 },
+	{ _MMIO(0x9888), 0x058f0000 },
+	{ _MMIO(0x9888), 0x018f0000 },
+	{ _MMIO(0x9888), 0x038f0001 },
+	{ _MMIO(0x9888), 0x11900013 },
+	{ _MMIO(0x9888), 0x1f900017 },
+	{ _MMIO(0x9888), 0x05900000 },
+	{ _MMIO(0x9888), 0x01900100 },
+	{ _MMIO(0x9888), 0x03900001 },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x03845555 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x458000aa },
+	{ _MMIO(0x9888), 0x478000aa },
+	{ _MMIO(0x9888), 0x4980018c },
+	{ _MMIO(0x9888), 0x4b80014b },
+	{ _MMIO(0x9888), 0x4d800128 },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x51800187 },
+	{ _MMIO(0x9888), 0x5380014b },
+	{ _MMIO(0x9888), 0x55800149 },
+	{ _MMIO(0x9888), 0x5780010a },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static const struct i915_oa_reg *
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_4);
+	return mux_config_l3_4;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_1);
+	return mux_config_sampler_1;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_2);
+	return mux_config_sampler_2;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x338b0000 },
+	{ _MMIO(0x9888), 0x258b0066 },
+	{ _MMIO(0x9888), 0x058b0000 },
+	{ _MMIO(0x9888), 0x038b0000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x47800080 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x1823a4), 0x00000000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.mux_regs = NULL;
@@ -144,13 +1908,288 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_4_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_1_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_2_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
 		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
 		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -160,14 +2199,64 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 		}
 
 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_tdl_1;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_tdl_1);
 
 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_tdl_1;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
 
 		return 0;
 	default:
@@ -197,6 +2286,292 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };
 
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "f522a89c-ecd1-4522-8331-3383c54af5f5",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "a9ccc03d-a943-4e6b-9cd6-13e063075927",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "2cf0c064-68df-4fac-9b3f-57f51ca8a069",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "78a87ff9-543a-49ce-95ea-26d86071ea93",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "9f2cece5-7bfe-4320-ad66-8c7cc526bec5",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "d890ef38-d309-47e4-b8b5-aa779bb19ab0",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "5fdff4a6-9dc8-45e1-bfda-ef54869fbdd4",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "2c0e45e1-7e2c-4a14-ae00-0b7ec868b8aa",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "71148d78-baf5-474f-878a-e23158d0265d",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "b996a2b7-c59c-492d-877a-8cd54fd6df84",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "eb2fecba-b431-42e7-8261-fe9429a6e67a",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "60749470-a648-4a4b-9f10-dbfe1e36e44d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "4a534b07-cba3-414d-8d60-874830e883aa",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 {
@@ -208,9 +2583,113 @@ i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -222,4 +2701,30 @@ i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c
index 4ddf756add31..7247d539ff04 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.c
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.c
@@ -47,6 +47,9 @@ static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_render_basic[] = {
 	{ _MMIO(0x253a4), 0x01600000 },
 	{ _MMIO(0x25440), 0x00100000 },
@@ -137,6 +140,9 @@ static const struct i915_oa_reg b_counter_config_compute_basic[] = {
 	{ _MMIO(0x236c), 0x00000000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_basic[] = {
 	{ _MMIO(0x253a4), 0x00000000 },
 	{ _MMIO(0x2681c), 0x01f00800 },
@@ -203,6 +209,9 @@ static const struct i915_oa_reg b_counter_config_compute_extended[] = {
 	{ _MMIO(0x27ac), 0x0000fffe },
 };
 
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_extended[] = {
 	{ _MMIO(0x2681c), 0x3eb00800 },
 	{ _MMIO(0x26820), 0x00900000 },
@@ -260,6 +269,9 @@ static const struct i915_oa_reg b_counter_config_memory_reads[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };
 
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_reads[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x2d800000 },
@@ -320,6 +332,9 @@ static const struct i915_oa_reg b_counter_config_memory_writes[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };
 
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_writes[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x01500000 },
@@ -358,6 +373,9 @@ static const struct i915_oa_reg b_counter_config_sampler_balance[] = {
 	{ _MMIO(0x2724), 0x00800000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_sampler_balance[] = {
+};
+
 static const struct i915_oa_reg mux_config_sampler_balance[] = {
 	{ _MMIO(0x2eb9c), 0x01906400 },
 	{ _MMIO(0x2fb9c), 0x01906400 },
@@ -436,6 +454,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_render_basic);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_BASIC:
 		dev_priv->perf.oa.mux_regs =
@@ -456,6 +479,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_basic);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_EXTENDED:
 		dev_priv->perf.oa.mux_regs =
@@ -476,6 +504,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_extended);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_READS:
 		dev_priv->perf.oa.mux_regs =
@@ -496,6 +529,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_reads);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_WRITES:
 		dev_priv->perf.oa.mux_regs =
@@ -516,6 +554,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_writes);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
 		return 0;
 	case METRIC_SET_ID_SAMPLER_BALANCE:
 		dev_priv->perf.oa.mux_regs =
@@ -536,6 +579,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_sampler_balance);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_balance;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_balance);
+
 		return 0;
 	default:
 		return -ENODEV;
@@ -714,19 +762,19 @@ i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 	return 0;
 
 error_sampler_balance:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
 error_memory_writes:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
 error_memory_reads:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
 error_compute_extended:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
 error_compute_basic:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
index ea0317a4781c..742ff11f9ec4 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt2.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -31,9 +31,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt2 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -138,66 +155,2953 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return NULL;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8200 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x004f0db2 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f1880 },
+	{ _MMIO(0x9888), 0x0a4f0011 },
+	{ _MMIO(0x9888), 0x0c4f0e3c },
+	{ _MMIO(0x9888), 0x0e4f1d80 },
+	{ _MMIO(0x9888), 0x086c0002 },
+	{ _MMIO(0x9888), 0x0a6c0100 },
+	{ _MMIO(0x9888), 0x0e6c000c },
+	{ _MMIO(0x9888), 0x026c000b },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x081b4000 },
+	{ _MMIO(0x9888), 0x0a1b8000 },
+	{ _MMIO(0x9888), 0x0e1b4000 },
+	{ _MMIO(0x9888), 0x021b4000 },
+	{ _MMIO(0x9888), 0x1a1c4000 },
+	{ _MMIO(0x9888), 0x1c1c0012 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x005bc000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5bc000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x105c8000 },
+	{ _MMIO(0x9888), 0x1a5ca000 },
+	{ _MMIO(0x9888), 0x1c5c002d },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x0a4c0800 },
+	{ _MMIO(0x9888), 0x0c4c0082 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002cc000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cbe00 },
+	{ _MMIO(0x9888), 0x182c00ef },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900840 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900840 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900840 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900840 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1810 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0000 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02);
+		return mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02;
+	} else if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02);
+		return mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x15968000 },
+	{ _MMIO(0x9888), 0x17968000 },
+	{ _MMIO(0x9888), 0x0f96c000 },
+	{ _MMIO(0x9888), 0x1f950011 },
+	{ _MMIO(0x9888), 0x1d950014 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x0b978000 },
+	{ _MMIO(0x9888), 0x0f974000 },
+	{ _MMIO(0x9888), 0x11974000 },
+	{ _MMIO(0x9888), 0x13978000 },
+	{ _MMIO(0x9888), 0x09974000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x05e5a000 },
+	{ _MMIO(0x9888), 0x01e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	if (dev_priv->drm.pdev->revision < 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_lt_0x02);
+		return mux_config_render_pipe_profile_0_sku_lt_0x02;
+	} else if (dev_priv->drm.pdev->revision >= 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_gte_0x02);
+		return mux_config_render_pipe_profile_0_sku_gte_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900157 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x15940016 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d9300aa },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x0f940018 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02);
+		return mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02;
+	} else if ((dev_priv->drm.pdev->revision < 0x05) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02);
+		return mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02;
+	} else if (dev_priv->drm.pdev->revision >= 0x05) {
+		*len = ARRAY_SIZE(mux_config_memory_reads_0_sku_gte_0x05);
+		return mux_config_memory_reads_0_sku_gte_0x05;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d93002a },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02);
+		return mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02;
+	} else if ((dev_priv->drm.pdev->revision < 0x05) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02);
+		return mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02;
+	} else if (dev_priv->drm.pdev->revision >= 0x05) {
+		*len = ARRAY_SIZE(mux_config_memory_writes_0_sku_gte_0x05);
+		return mux_config_memory_writes_0_sku_gte_0x05;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		return mux_config_compute_extended_0_subslices_0x01;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900167 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900042 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53901111 },
+	{ _MMIO(0x9888), 0x43900420 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0e0f006c },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1190e000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c00 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x143a5800 },
+	{ _MMIO(0x9888), 0x163a00c0 },
+	{ _MMIO(0x9888), 0x12380240 },
+	{ _MMIO(0x9888), 0x14380002 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c1500 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f9500 },
+	{ _MMIO(0x9888), 0x100f002a },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x0a2dc000 },
+	{ _MMIO(0x9888), 0x0c2dc000 },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x06393000 },
+	{ _MMIO(0x9888), 0x0c3a28c1 },
+	{ _MMIO(0x9888), 0x003a0000 },
+	{ _MMIO(0x9888), 0x0a33f000 },
+	{ _MMIO(0x9888), 0x0c33f000 },
+	{ _MMIO(0x9888), 0x0a37a000 },
+	{ _MMIO(0x9888), 0x0c37a000 },
+	{ _MMIO(0x9888), 0x0a380977 },
+	{ _MMIO(0x9888), 0x08380000 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06383000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900800 },
+	{ _MMIO(0x9888), 0x47901000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900844 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810016 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "fe47b29d-ae51-423e-bff4-27d965a95b60",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "e0ad5ae0-84ba-4f29-a723-1906c12cb774",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "9bc436dd-6130-4add-affc-283eb6eaa864",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "2ea0da8f-3527-4669-9d9d-13099a7435bf",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "d97d16af-028b-4cd1-a672-6210cb5513dd",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "9fb22842-e708-43f7-9752-e0e41670c39e",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "5378e2a1-4248-4188-a4ae-da25a794c603",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "f42cdd6a-b000-42cb-870f-5eb423a7f514",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "b9bf2423-d88c-4a7b-a051-627611d00dcc",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "2414a93d-d84f-406e-99c0-472161194b40",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "53a45d2d-170b-4cf5-b7bb-585120c8e2f5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "b4cff514-a91e-4798-a0b3-426ca13fc9c1",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "7821d13b-9b8b-4405-9618-78cd56b62cce",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "893f1a4d-919d-4388-8cb7-746d73ea7259",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "41a24047-7484-4ead-ae37-de907e5ff2b2",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "95910492-943f-44bd-9461-390240f243fd",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "1651949f-0ac0-4cb1-a06f-dafd74a407d1",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -211,9 +3115,145 @@ i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -225,4 +3265,38 @@ i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
index a21211136191..bbc01a877eea 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt3.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -31,9 +31,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt3 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -146,66 +163,2499 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return mux_config_render_basic;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900863 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900c62 },
+	{ _MMIO(0x9888), 0x53903333 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901150 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x55905111 },
+	{ _MMIO(0x9888), 0x45901400 },
+	{ _MMIO(0x9888), 0x479004a5 },
+	{ _MMIO(0x9888), 0x57903455 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900005 },
+	{ _MMIO(0x9888), 0x53900455 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extended);
+	return mux_config_compute_extended;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900063 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53903333 },
+	{ _MMIO(0x9888), 0x43900840 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900005 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900402 },
+	{ _MMIO(0x9888), 0x53901550 },
+	{ _MMIO(0x9888), 0x45900080 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900050 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900115 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900884 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "4320492b-fd03-42ac-922f-dbe1ef3b7b58",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "bd2d9cae-b9ec-4f5b-9d2f-934bed398a2d",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "4ca0f3fe-7fd3-4924-98cb-1807d9879767",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "a0c0172c-ee13-403d-99ff-2bdf6936cf14",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "52435e0b-f188-42ea-8680-21a56ee20dee",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27076eeb-49f3-4fed-8423-c66506005c63",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "4616d450-2393-4836-8146-53c5ed84d359",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "8071b409-c39a-4674-94d7-32962ecfb512",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "5e0b391e-9ea8-4901-b2ff-b64ff616c7ed",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "25dc828e-1d2d-426e-9546-a1d4233cdf16",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "3dba9405-2d7e-4d70-8199-e734e82fd6bf",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "76935d7b-09c9-46bf-87f1-c18b4a86ebe5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "1b34c0d6-4f4c-4d7b-833f-4aaf236d87a6",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "b375c985-9953-455b-bda2-b03f7594e9db",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "3e2be2bb-884a-49bb-82c5-2358e6bd5f2d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "2d80a648-7b5a-4e92-bbe7-3b5c76f2e221",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "cfae9232-6ffc-42cc-a703-9790016925f0",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "2b985803-d3c9-4629-8a4f-634bfecba0e8",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -219,9 +2669,145 @@ i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -233,4 +2819,38 @@ i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
index 1af4ebe7d51f..9acd016c7057 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt4.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -31,9 +31,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt4 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -157,66 +174,2542 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return mux_config_render_basic;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53905555 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901110 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55901111 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57901411 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900411 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extended);
+	return mux_config_compute_extended;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53905555 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x131203e0 },
+	{ _MMIO(0x9888), 0x133203e0 },
+	{ _MMIO(0x9888), 0x135203e0 },
+	{ _MMIO(0x9888), 0x1a4ef000 },
+	{ _MMIO(0x9888), 0x1c4e0003 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0c4c02a0 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0150 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x182c00a8 },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1acef000 },
+	{ _MMIO(0x9888), 0x1cce0003 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0ccc02a0 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x0c8d8000 },
+	{ _MMIO(0x9888), 0x0e8da000 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x108f0150 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x18ac00a8 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x072f8000 },
+	{ _MMIO(0x9888), 0x0d4c0100 },
+	{ _MMIO(0x9888), 0x0d0d8000 },
+	{ _MMIO(0x9888), 0x0f0da000 },
+	{ _MMIO(0x9888), 0x110f01b0 },
+	{ _MMIO(0x9888), 0x192c0080 },
+	{ _MMIO(0x9888), 0x0f2d4000 },
+	{ _MMIO(0x9888), 0x0f108000 },
+	{ _MMIO(0x9888), 0x0f118000 },
+	{ _MMIO(0x9888), 0x0f121980 },
+	{ _MMIO(0x9888), 0x01120000 },
+	{ _MMIO(0x9888), 0x0f134000 },
+	{ _MMIO(0x9888), 0x0f304000 },
+	{ _MMIO(0x9888), 0x0f314000 },
+	{ _MMIO(0x9888), 0x0f320033 },
+	{ _MMIO(0x9888), 0x01320000 },
+	{ _MMIO(0x9888), 0x0f331000 },
+	{ _MMIO(0x9888), 0x0d508000 },
+	{ _MMIO(0x9888), 0x0d518000 },
+	{ _MMIO(0x9888), 0x0d521980 },
+	{ _MMIO(0x9888), 0x01520000 },
+	{ _MMIO(0x9888), 0x0d534000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51901100 },
+	{ _MMIO(0x9888), 0x41901000 },
+	{ _MMIO(0x9888), 0x43901423 },
+	{ _MMIO(0x9888), 0x53903331 },
+	{ _MMIO(0x9888), 0x45900044 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900010 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900821 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "7277228f-e7f3-4743-945a-6a2049d11377",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "463c668c-3f60-49b6-8f85-d995b635b3b2",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "3ae6e74c-72c3-4040-9bd0-7961430b8cc8",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "055f256d-4052-467c-8dec-6064a4806433",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "753972d4-87cd-4460-824d-754463ac5054",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "4e4392e9-8f73-457b-ab44-b49f7a0c733b",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "730d95dd-7da8-4e1c-ab8d-c0eb1e4c1805",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "d9e86d70-462b-462a-851e-fd63e8c13d63",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "52200424-6ee9-48b3-b7fa-0afcf1975e4d",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "1988315f-0a26-44df-acb0-df7ec86b1456",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "f1f17ca7-286e-4ae5-9d15-9fccad6c665d",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "00a9e0fb-3d2e-4405-852c-dce6334ffb3b",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "13dcc50a-7ec0-409b-99d6-a3f932cedcb3",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "97875e21-6624-4aee-9191-682feb3eae21",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "a5aa857d-e8f0-4dfa-8981-ce340fa748fd",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "0e8d8b86-4ee7-4cdd-aaaa-58adc92cb29e",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "882fa433-1f4a-4a67-a962-c741888fe5f5",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -230,9 +2723,145 @@ i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -244,4 +2873,38 @@ i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
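
For reference, the sysfs groups registered above are how userspace
discovers each metric set: the group name is the configuration GUID and
the single "id" file holds the integer that gets passed to the perf
open ioctl as the metrics set. A minimal sketch of reading the TestOa
id on SKL GT4 (assuming the usual card0 sysfs path; error handling
mostly omitted):

  #include <stdio.h>

  int main(void)
  {
          /* GUID matches group_test_oa.name above; the card index may differ */
          const char *path = "/sys/class/drm/card0/metrics/"
                  "882fa433-1f4a-4a67-a962-c741888fe5f5/id";
          FILE *f = fopen(path, "r");
          int id = -1;

          if (f && fscanf(f, "%d", &id) == 1)
                  printf("TestOa metric set id: %d\n", id);
          if (f)
                  fclose(f);
          return 0;
  }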
-- 
2.12.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v4 14/15] drm/i915/perf: per-gen timebase for checking sample freq
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (12 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 15:55 ` [PATCH v4 15/15] drm/i915/perf: remove perf.hook_lock Robert Bragg
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx; +Cc: Lionel Landwerlin

An oa_exponent_to_ns() utility and per-gen timebase constants were
recently removed when updating the tail pointer race condition WA; this
restores them so that the _PROP_OA_EXPONENT validation done in
read_properties_unlocked() no longer assumes the 12.5MHz timebase we
had on Haswell.

Accordingly the oa_sample_rate_hard_limit value referenced by
proc_dointvec_minmax, which defines the absolute limit for the OA
sampling frequency, is now initialized to (timestamp_frequency / 2)
instead of the 6.25MHz constant that was only correct for Haswell.
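
To make the arithmetic concrete, here is a tiny standalone sketch (not
part of the patch; exponent_to_ns() below is just a stand-in mirroring
the driver's oa_exponent_to_ns()) showing the period that exponent 0
maps to for each of the timebases set up in this patch:

  #include <stdio.h>
  #include <stdint.h>

  /* period_ns = 10^9 * 2^(exponent + 1) / timestamp_frequency */
  static uint64_t exponent_to_ns(uint64_t freq_hz, int exponent)
  {
          return (1000000000ULL * (2ULL << exponent)) / freq_hz;
  }

  int main(void)
  {
          /* 160ns on HSW (12.5MHz), 166ns (~167ns) on BDW/SKL (12MHz),
           * 104ns on BXT (19.2MHz)
           */
          printf("HSW     %llu ns\n", (unsigned long long)exponent_to_ns(12500000, 0));
          printf("BDW/SKL %llu ns\n", (unsigned long long)exponent_to_ns(12000000, 0));
          printf("BXT     %llu ns\n", (unsigned long long)exponent_to_ns(19200000, 0));
          return 0;
  }

The hard limit exported via the sysctl then falls out as
timestamp_frequency / 2, i.e. 6.25MHz on HSW, 6MHz on BDW/SKL and
9.6MHz on BXT.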

v2:
    Specify frequency of 19.2MHz for BXT (Ville)
    Initialize oa_sample_rate_hard_limit per-gen too (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  1 +
 drivers/gpu/drm/i915/i915_perf.c | 37 ++++++++++++++++++++++++++-----------
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b8dcf281db53..59dcce3b40a9 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2457,6 +2457,7 @@ struct drm_i915_private {
 
 			bool periodic;
 			int period_exponent;
+			int timestamp_frequency;
 
 			int metrics_set;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 611f996bece7..5de8d57e0b77 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -288,10 +288,12 @@ static u32 i915_perf_stream_paranoid = true;
 
 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
- * 160ns is the smallest sampling period we can theoretically program the OA
- * unit with on Haswell, corresponding to 6.25MHz.
+ * The highest sampling frequency we can theoretically program the OA unit
+ * with is always half the timestamp frequency: E.g. 6.25Mhz for Haswell.
+ *
+ * Initialized just before we register the sysctl parameter.
  */
-static int oa_sample_rate_hard_limit = 6250000;
+static int oa_sample_rate_hard_limit;
 
 /* Theoretically we can program the OA unit to sample every 160ns but don't
  * allow that by default unless root...
@@ -2560,6 +2562,12 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	return ret;
 }
 
+static u64 oa_exponent_to_ns(struct drm_i915_private *dev_priv, int exponent)
+{
+	return div_u64(1000000000ULL * (2ULL << exponent),
+		       dev_priv->perf.oa.timestamp_frequency);
+}
+
 /**
  * read_properties_unlocked - validate + copy userspace stream open properties
  * @dev_priv: i915 device instance
@@ -2656,16 +2664,13 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			}
 
 			/* Theoretically we can program the OA unit to sample
-			 * every 160ns but don't allow that by default unless
-			 * root.
-			 *
-			 * On Haswell the period is derived from the exponent
-			 * as:
-			 *
-			 *   period = 80ns * 2^(exponent + 1)
+			 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns
+			 * for BXT. We don't allow such high sampling
+			 * frequencies by default unless root.
 			 */
+
 			BUILD_BUG_ON(sizeof(oa_period) != 8);
-			oa_period = 80ull * (2ull << value);
+			oa_period = oa_exponent_to_ns(dev_priv, value);
 
 			/* This check is primarily to ensure that oa_period <=
 			 * UINT32_MAX (before passing to do_div which only
@@ -2921,6 +2926,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.ops.oa_hw_tail_read =
 			gen7_oa_hw_tail_read;
 
+		dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
 
 		dev_priv->perf.oa.n_builtin_sets =
@@ -2934,6 +2941,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		 */
 
 		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
@@ -2950,6 +2959,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 					i915_oa_select_metric_set_chv;
 			}
 		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.timestamp_frequency = 12000000;
+
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
@@ -2970,6 +2981,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_sklgt4;
 			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.timestamp_frequency = 19200000;
+
 				dev_priv->perf.oa.n_builtin_sets =
 					i915_oa_n_builtin_metric_sets_bxt;
 				dev_priv->perf.oa.ops.select_metric_set =
@@ -3004,6 +3017,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
+		oa_sample_rate_hard_limit =
+			dev_priv->perf.oa.timestamp_frequency / 2;
 		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
 
 		dev_priv->perf.initialized = true;
-- 
2.12.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v4 15/15] drm/i915/perf: remove perf.hook_lock
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (13 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 14/15] drm/i915/perf: per-gen timebase for checking sample freq Robert Bragg
@ 2017-04-12 15:55 ` Robert Bragg
  2017-04-12 17:33 ` ✓ Fi.CI.BAT: success for Enable OA unit for Gen 8 and 9 in i915 perf (rev6) Patchwork
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Robert Bragg @ 2017-04-12 15:55 UTC (permalink / raw)
  To: intel-gfx

In earlier iterations of the i915-perf driver we had a number of
callbacks/hooks from other parts of the i915 driver, e.g. to notify us
when a legacy context was pinned. These could run asynchronously with
respect to the stream file operations and might also run in atomic
context.

dev_priv->perf.hook_lock had been used to serialise access to state
needed within these callbacks, but as the code has evolved some of the
hooks have gone away or are now implemented so as not to need any
locked state.

The remaining use of this lock was actually redundant considering how
the gen7 oacontrol state used to be updated as part of a context pin
hook.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  2 --
 drivers/gpu/drm/i915/i915_perf.c | 32 ++++++++++----------------------
 2 files changed, 10 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 59dcce3b40a9..94c1f5331daf 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2438,8 +2438,6 @@ struct drm_i915_private {
 		struct mutex lock;
 		struct list_head streams;
 
-		spinlock_t hook_lock;
-
 		struct {
 			struct i915_perf_stream *exclusive_stream;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 5de8d57e0b77..1f25a6690f61 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1690,9 +1690,17 @@ static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
 	/* NOP */
 }
 
-static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
+static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 {
-	lockdep_assert_held(&dev_priv->perf.hook_lock);
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory which this helps maintains.
+	 */
+	gen7_init_oa_buffer(dev_priv);
 
 	if (dev_priv->perf.oa.exclusive_stream->enabled) {
 		struct i915_gem_context *ctx =
@@ -1715,25 +1723,6 @@ static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 		I915_WRITE(GEN7_OACONTROL, 0);
 }
 
-static void gen7_oa_enable(struct drm_i915_private *dev_priv)
-{
-	unsigned long flags;
-
-	/* Reset buf pointers so we don't forward reports from before now.
-	 *
-	 * Think carefully if considering trying to avoid this, since it
-	 * also ensures status flags and the buffer itself are cleared
-	 * in error paths, and we have checks for invalid reports based
-	 * on the assumption that certain fields are written to zeroed
-	 * memory which this helps maintains.
-	 */
-	gen7_init_oa_buffer(dev_priv);
-
-	spin_lock_irqsave(&dev_priv->perf.hook_lock, flags);
-	gen7_update_oacontrol_locked(dev_priv);
-	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
-}
-
 static void gen8_oa_enable(struct drm_i915_private *dev_priv)
 {
 	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
@@ -3014,7 +3003,6 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 
 		INIT_LIST_HEAD(&dev_priv->perf.streams);
 		mutex_init(&dev_priv->perf.lock);
-		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
 		oa_sample_rate_hard_limit =
-- 
2.12.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* ✓ Fi.CI.BAT: success for Enable OA unit for Gen 8 and 9 in i915 perf (rev6)
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (14 preceding siblings ...)
  2017-04-12 15:55 ` [PATCH v4 15/15] drm/i915/perf: remove perf.hook_lock Robert Bragg
@ 2017-04-12 17:33 ` Patchwork
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Patchwork @ 2017-04-12 17:33 UTC (permalink / raw)
  To: Robert Bragg; +Cc: intel-gfx

== Series Details ==

Series: Enable OA unit for Gen 8 and 9 in i915 perf (rev6)
URL   : https://patchwork.freedesktop.org/series/20084/
State : success

== Summary ==

Series 20084v6 Enable OA unit for Gen 8 and 9 in i915 perf
https://patchwork.freedesktop.org/api/1.0/series/20084/revisions/6/mbox/

Test gem_exec_flush:
        Subgroup basic-batch-kernel-default-uc:
                fail       -> PASS       (fi-snb-2600) fdo#100007
Test gem_exec_suspend:
        Subgroup basic-s4-devices:
                dmesg-warn -> PASS       (fi-kbl-7560u) fdo#100125

fdo#100007 https://bugs.freedesktop.org/show_bug.cgi?id=100007
fdo#100125 https://bugs.freedesktop.org/show_bug.cgi?id=100125

fi-bdw-5557u     total:278  pass:267  dwarn:0   dfail:0   fail:0   skip:11  time:428s
fi-bdw-gvtdvm    total:278  pass:256  dwarn:8   dfail:0   fail:0   skip:14  time:428s
fi-bsw-n3050     total:278  pass:242  dwarn:0   dfail:0   fail:0   skip:36  time:590s
fi-bxt-t5700     total:278  pass:258  dwarn:0   dfail:0   fail:0   skip:20  time:549s
fi-byt-n2820     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:479s
fi-hsw-4770      total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:411s
fi-hsw-4770r     total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:403s
fi-ilk-650       total:278  pass:228  dwarn:0   dfail:0   fail:0   skip:50  time:422s
fi-ivb-3520m     total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:492s
fi-ivb-3770      total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:466s
fi-kbl-7500u     total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:454s
fi-kbl-7560u     total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:564s
fi-skl-6260u     total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:452s
fi-skl-6700hq    total:278  pass:261  dwarn:0   dfail:0   fail:0   skip:17  time:575s
fi-skl-6700k     total:278  pass:256  dwarn:4   dfail:0   fail:0   skip:18  time:456s
fi-skl-6770hq    total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:494s
fi-skl-gvtdvm    total:278  pass:265  dwarn:0   dfail:0   fail:0   skip:13  time:429s
fi-snb-2520m     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:533s
fi-snb-2600      total:278  pass:249  dwarn:0   dfail:0   fail:0   skip:29  time:403s
fi-bxt-j4205 failed to collect. IGT log at Patchwork_4497/fi-bxt-j4205/igt.log
fi-byt-j1900 failed to collect. IGT log at Patchwork_4497/fi-byt-j1900/igt.log

57188aaf3b96d08e4095755fda0e577b1e0d8c60 drm-tip: 2017y-04m-12d-16h-12m-10s UTC integration manifest
19c9282 drm/i915/perf: remove perf.hook_lock
2f78071 drm/i915/perf: per-gen timebase for checking sample freq
953347e drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
96f8110 drm/i915/perf: Add OA unit support for Gen 8+
c5ccb72 drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
8241dda drm/i915: expose _SUBSLICE_MASK GETPARM
0bae603 drm/i915: expose _SLICE_MASK GETPARM
29590b2 drm/i915/perf: rate limit spurious oa report notice
5ab4f33 drm/i915/perf: better pipeline aged/aging tail updates
818cc70 drm/i915/perf: improve invalid OA format debug message
9fe2f7e drm/i915/perf: improve tail race workaround
69ac693 drm/i915/perf: no head/tail ref in gen7_oa_read
00e051e drm/i915/perf: avoid read back of head register
372021d drm/i915/perf: avoid poll, read, EAGAIN busy loops
227c908 drm/i915/perf: fix gen7_append_oa_reports comment

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_4497/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v4 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-12 15:55 ` [PATCH v4 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Robert Bragg
@ 2017-04-12 17:47   ` Matthew Auld
  0 siblings, 0 replies; 87+ messages in thread
From: Matthew Auld @ 2017-04-12 17:47 UTC (permalink / raw)
  To: Robert Bragg; +Cc: Intel Graphics Development

On 12 April 2017 at 16:55, Robert Bragg <robert@sixbynine.org> wrote:
> Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
> share (more-or-less) the same OA unit design.
>
> Of particular note in comparison to Haswell: some OA unit HW config
> state has become per-context state and as a consequence it is somewhat
> more complicated to manage synchronous state changes from the cpu while
> there's no guarantee of what context (if any) is currently actively
> running on the gpu.
>
> The periodic sampling frequency which can be particularly useful for
> system-wide analysis (as opposed to command stream synchronised
> MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
> have become per-context saved and restored (while the OABUFFER
> destination is still a shared, system-wide resource).
>
> This support for gen8+ takes care to consider a number of timing
> challenges involved in synchronously updating per-context state
> primarily by programming all config state from the cpu and updating all
> current and saved contexts synchronously while the OA unit is still
> disabled.
>
> The driver intentionally avoids depending on command streamer
> programming to update OA state considering the lack of synchronization
> between the automatic loading of OACTXCONTROL state (that includes the
> periodic sampling state and enable state) on context restore and the
> parsing of any general purpose BB the driver can control. I.e. this
> implementation is careful to avoid the possibility of a context restore
> temporarily enabling any out-of-date periodic sampling state. In
> addition to the risk of transiently-out-of-date state being loaded
> automatically; there are also internal HW latencies involved in the
> loading of MUX configurations which would be difficult to account for
> from the command streamer (and we only want to enable the unit once
> the MUX configuration is complete).
>
> Since the Gen8+ OA unit design no longer supports clock gating the unit
> off for a single given context (which effectively stopped any progress
> of counters while any other context was running) and instead supports
> tagging OA reports with a context ID for filtering on the CPU, it means
> we can no longer hide the system-wide progress of counters from a
> non-privileged application only interested in metrics for its own
> context. Although we could theoretically try and subtract the progress
> of other contexts before forwarding reports via read() we aren't in a
> position to filter reports captured via MI_REPORT_PERF_COUNT commands.
> As a result, for Gen8+, we always require the
> dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
> if not root.
>
> Signed-off-by: Robert Bragg <robert@sixbynine.org>
> Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
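
To tie the uapi together, here is a minimal sketch of how a tool might
open a Gen8+ OA stream under the rules described above (it assumes the
properties and the A32u40_A4u32_B8_C8 format added by this series, a
metric set id read from the sysfs "id" files, and that
dev.i915.perf_stream_paranoid is 0 when not running as root; the
exponent of 16 is just an example, giving roughly a 10.9ms period on a
12MHz timebase):

  #include <stdint.h>
  #include <xf86drm.h>
  #include <drm/i915_drm.h>

  static int open_oa_stream(int drm_fd, uint64_t metric_set_id)
  {
          uint64_t properties[] = {
                  DRM_I915_PERF_PROP_SAMPLE_OA, 1,
                  DRM_I915_PERF_PROP_OA_METRICS_SET, metric_set_id,
                  DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
                  DRM_I915_PERF_PROP_OA_EXPONENT, 16,
          };
          struct drm_i915_perf_open_param param = {
                  .flags = I915_PERF_FLAG_FD_CLOEXEC | I915_PERF_FLAG_FD_NONBLOCK,
                  .num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
                  .properties_ptr = (uintptr_t)properties,
          };

          /* On success the ioctl returns a new stream fd to read() OA
           * reports from; -1 on error.
           */
          return drmIoctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }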
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (15 preceding siblings ...)
  2017-04-12 17:33 ` ✓ Fi.CI.BAT: success for Enable OA unit for Gen 8 and 9 in i915 perf (rev6) Patchwork
@ 2017-04-24  7:17 ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Lionel Landwerlin
                     ` (18 more replies)
  2017-04-24 18:52 ` ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev7) Patchwork
  2017-04-25 17:35 ` ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev8) Patchwork
  18 siblings, 19 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

Hi,

Taking over from Rob for this v5. This series has only the following
changes from v4:

 - patches 9 & 10: updated the GETPARAM numbers after rebase

 - patch 12 : drain the GPU before reconfiguring the OA unit to work
   around a race condition where the CPU & GPU update the context
   image at the same time (see gen8_configure_all_contexts)

The second item had been causing instabilities for a while in the IGT
tests. With that out of the way, tests are a lot more stable now.

Cheers,

Robert Bragg (15):
  drm/i915/perf: fix gen7_append_oa_reports comment
  drm/i915/perf: avoid poll, read, EAGAIN busy loops
  drm/i915/perf: avoid read back of head register
  drm/i915/perf: no head/tail ref in gen7_oa_read
  drm/i915/perf: improve tail race workaround
  drm/i915/perf: improve invalid OA format debug message
  drm/i915/perf: better pipeline aged/aging tail updates
  drm/i915/perf: rate limit spurious oa report notice
  drm/i915: expose _SLICE_MASK GETPARM
  drm/i915: expose _SUBSLICE_MASK GETPARM
  drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  drm/i915/perf: Add OA unit support for Gen 8+
  drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  drm/i915/perf: per-gen timebase for checking sample freq
  drm/i915/perf: remove perf.hook_lock

 drivers/gpu/drm/i915/Makefile           |    8 +-
 drivers/gpu/drm/i915/i915_drv.c         |   10 +
 drivers/gpu/drm/i915/i915_drv.h         |  121 +-
 drivers/gpu/drm/i915/i915_gem_context.h |    1 +
 drivers/gpu/drm/i915/i915_oa_bdw.c      | 5154 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h      |   38 +
 drivers/gpu/drm/i915/i915_oa_bxt.c      | 2541 +++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h      |   38 +
 drivers/gpu/drm/i915/i915_oa_chv.c      | 2730 ++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h      |   38 +
 drivers/gpu/drm/i915/i915_oa_hsw.c      |   58 +-
 drivers/gpu/drm/i915/i915_oa_sklgt2.c   | 3302 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h   |   38 +
 drivers/gpu/drm/i915/i915_oa_sklgt3.c   | 2856 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h   |   38 +
 drivers/gpu/drm/i915/i915_oa_sklgt4.c   | 2910 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h   |   38 +
 drivers/gpu/drm/i915/i915_perf.c        | 1361 ++++++--
 drivers/gpu/drm/i915/i915_reg.h         |   22 +
 drivers/gpu/drm/i915/intel_lrc.c        |    5 +
 include/uapi/drm/i915_drm.h             |   27 +-
 21 files changed, 21096 insertions(+), 238 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v5 01/15] drm/i915/perf: fix gen7_append_oa_reports comment
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops Lionel Landwerlin
                     ` (17 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

If I'm going to complain about a back-to-front convention then the least
I can do is not muddle the comment up too.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 060b171480d5..78fef53b45c9 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -431,7 +431,7 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  * userspace.
  *
  * Note: reports are consumed from the head, and appended to the
- * tail, so the head chases the tail?... If you think that's mad
+ * tail, so the tail chases the head?... If you think that's mad
  * and back-to-front you're not alone, but this follows the
  * Gen PRM naming convention.
  *
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 03/15] drm/i915/perf: avoid read back of head register Lionel Landwerlin
                     ` (16 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

If the function for checking whether there is OA buffer data available
(during a poll or blocking read) has false positives then we want to
avoid a situation where the subsequent read() returns EAGAIN (after
a more accurate check) followed by a poll() immediately reporting
the same false positive POLLIN event and effectively maintaining a
busy loop until there really is data.

This makes sure that we clear the .pollin event status whenever we
return EAGAIN to userspace which will throttle subsequent POLLIN events
and repeated attempts to read to the 5ms intervals of the hrtimer
callback we have.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 78fef53b45c9..f59f6dd20922 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1352,7 +1352,15 @@ static ssize_t i915_perf_read(struct file *file,
 		mutex_unlock(&dev_priv->perf.lock);
 	}

-	if (ret >= 0) {
+	/* We allow the poll checking to sometimes report false positive POLLIN
+	 * events where we might actually report EAGAIN on read() if there's
+	 * not really any data available. In this situation though we don't
+	 * want to enter a busy loop between poll() reporting a POLLIN event
+	 * and read() returning -EAGAIN. Clearing the oa.pollin state here
+	 * effectively ensures we back off until the next hrtimer callback
+	 * before reporting another POLLIN event.
+	 */
+	if (ret >= 0 || ret == -EAGAIN) {
 		/* Maybe make ->pollin per-stream state if we support multiple
 		 * concurrent streams in the future.
 		 */
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 03/15] drm/i915/perf: avoid read back of head register
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read Lionel Landwerlin
                     ` (15 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

There's no need for the driver to keep reading back the head pointer
from hardware since the hardware doesn't update it automatically. This
way we can treat any invalid head pointer value as a software/driver
bug instead of spurious hardware behaviour.

This change is also a small stepping stone towards re-working how
the head and tail state is managed as part of an improved workaround
for the tail register race condition.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  | 11 ++++++++++
 drivers/gpu/drm/i915/i915_perf.c | 46 ++++++++++++++++++----------------------
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 357b6c6c2f04..b618a8bc54f6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2469,6 +2469,17 @@ struct drm_i915_private {
 				u8 *vaddr;
 				int format;
 				int format_size;
+
+				/**
+				 * Although we can always read back the head
+				 * pointer register, we prefer to avoid
+				 * trusting the HW state, just to avoid any
+				 * risk that some hardware condition could
+				 * somehow bump the head pointer unpredictably
+				 * and cause us to forward the wrong OA buffer
+				 * data to userspace.
+				 */
+				u32 head;
 			} oa_buffer;

 			u32 gen7_latched_oastatus1;
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index f59f6dd20922..f47d1cc2144b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -322,9 +322,8 @@ struct perf_open_properties {
 static bool gen7_oa_buffer_is_empty_fop_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
-	u32 oastatus2 = I915_READ(GEN7_OASTATUS2);
 	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
-	u32 head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK;
+	u32 head = dev_priv->perf.oa.oa_buffer.head;
 	u32 tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;

 	return OA_TAKEN(tail, head) <
@@ -458,16 +457,24 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		return -EIO;

 	head = *head_ptr - gtt_offset;
+
+	/* An out of bounds or misaligned head pointer implies a driver bug
+	 * since we are in full control of head pointer which should only
+	 * be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size,
+		      "Inconsistent OA buffer head pointer = %u\n", head))
+		return -EIO;
+
 	tail -= gtt_offset;

 	/* The OA unit is expected to wrap the tail pointer according to the OA
-	 * buffer size and since we should never write a misaligned head
-	 * pointer we don't expect to read one back either...
+	 * buffer size
 	 */
-	if (tail > OA_BUFFER_SIZE || head > OA_BUFFER_SIZE ||
-	    head % report_size) {
-		DRM_ERROR("Inconsistent OA buffer pointer (head = %u, tail = %u): force restart\n",
-			  head, tail);
+	if (tail > OA_BUFFER_SIZE) {
+		DRM_ERROR("Inconsistent OA buffer tail pointer = %u: force restart\n",
+			  tail);
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
 		*head_ptr = I915_READ(GEN7_OASTATUS2) &
@@ -562,8 +569,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 			size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
-	u32 oastatus2;
 	u32 oastatus1;
 	u32 head;
 	u32 tail;
@@ -572,10 +577,9 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
 		return -EIO;

-	oastatus2 = I915_READ(GEN7_OASTATUS2);
 	oastatus1 = I915_READ(GEN7_OASTATUS1);

-	head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK;
+	head = dev_priv->perf.oa.oa_buffer.head;
 	tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;

 	/* XXX: On Haswell we don't have a safe way to clear oastatus1
@@ -616,10 +620,9 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);

-		oastatus2 = I915_READ(GEN7_OASTATUS2);
 		oastatus1 = I915_READ(GEN7_OASTATUS1);

-		head = oastatus2 & GEN7_OASTATUS2_HEAD_MASK;
+		head = dev_priv->perf.oa.oa_buffer.head;
 		tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
 	}

@@ -635,17 +638,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	ret = gen7_append_oa_reports(stream, buf, count, offset,
 				     &head, tail);

-	/* All the report sizes are a power of two and the
-	 * head should always be incremented by some multiple
-	 * of the report size.
-	 *
-	 * A warning here, but notably if we later read back a
-	 * misaligned pointer we will treat that as a bug since
-	 * it could lead to a buffer overrun.
-	 */
-	WARN_ONCE(head & (report_size - 1),
-		  "i915: Writing misaligned OA head pointer");
-
 	/* Note: we update the head pointer here even if an error
 	 * was returned since the error may represent a short read
 	 * where some some reports were successfully copied.
@@ -653,6 +645,7 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	I915_WRITE(GEN7_OASTATUS2,
 		   ((head & GEN7_OASTATUS2_HEAD_MASK) |
 		    OA_MEM_SELECT_GGTT));
+	dev_priv->perf.oa.oa_buffer.head = head;

 	return ret;
 }
@@ -834,7 +827,10 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	 * before OASTATUS1, but after OASTATUS2
 	 */
 	I915_WRITE(GEN7_OASTATUS2, gtt_offset | OA_MEM_SELECT_GGTT); /* head */
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
 	I915_WRITE(GEN7_OABUFFER, gtt_offset);
+
 	I915_WRITE(GEN7_OASTATUS1, gtt_offset | OABUFFER_SIZE_16M); /* tail */

 	/* On Haswell we have to track which OASTATUS1 flags we've
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (2 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 03/15] drm/i915/perf: avoid read back of head register Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 05/15] drm/i915/perf: improve tail race workaround Lionel Landwerlin
                     ` (14 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

This avoids redundantly passing an (inout) head and tail pointer to
gen7_append_oa_reports() from gen7_oa_read which doesn't need to
reference either itself.

Moving the head/tail reads and writes into gen7_append_oa_reports should
have no functional effect except to avoid some redundant head pointer
writes in cases where nothing was copied to userspace.

This is a stepping stone towards updating how the head and tail pointer
state is managed to improve the workaround for the OA unit's tail
pointer race. It reduces the number of places we need to read/write the
head and tail pointers.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 51 +++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index f47d1cc2144b..83dc67a635fb 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -420,8 +420,6 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  * @buf: destination buffer given by userspace
  * @count: the number of bytes userspace wants to read
  * @offset: (inout): the current position for writing into @buf
- * @head_ptr: (inout): the current oa buffer cpu read position
- * @tail: the current oa buffer gpu write position
  *
  * Notably any error condition resulting in a short read (-%ENOSPC or
  * -%EFAULT) will be returned even though one or more records may
@@ -439,9 +437,7 @@ static int append_oa_sample(struct i915_perf_stream *stream,
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
-				  size_t *offset,
-				  u32 *head_ptr,
-				  u32 tail)
+				  size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
@@ -449,14 +445,15 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 	int tail_margin = dev_priv->perf.oa.tail_margin;
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
 	u32 mask = (OA_BUFFER_SIZE - 1);
-	u32 head;
+	size_t start_offset = *offset;
+	u32 head, oastatus1, tail;
 	u32 taken;
 	int ret = 0;

 	if (WARN_ON(!stream->enabled))
 		return -EIO;

-	head = *head_ptr - gtt_offset;
+	head = dev_priv->perf.oa.oa_buffer.head - gtt_offset;

 	/* An out of bounds or misaligned head pointer implies a driver bug
 	 * since we are in full control of head pointer which should only
@@ -467,7 +464,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		      "Inconsistent OA buffer head pointer = %u\n", head))
 		return -EIO;

-	tail -= gtt_offset;
+	oastatus1 = I915_READ(GEN7_OASTATUS1);
+	tail = (oastatus1 & GEN7_OASTATUS1_TAIL_MASK) - gtt_offset;

 	/* The OA unit is expected to wrap the tail pointer according to the OA
 	 * buffer size
@@ -477,8 +475,6 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 			  tail);
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
-		*head_ptr = I915_READ(GEN7_OASTATUS2) &
-			GEN7_OASTATUS2_HEAD_MASK;
 		return -EIO;
 	}

@@ -542,7 +538,18 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		report32[0] = 0;
 	}

-	*head_ptr = gtt_offset + head;
+
+	if (start_offset != *offset) {
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN7_OASTATUS2,
+			   ((head & GEN7_OASTATUS2_HEAD_MASK) |
+			    OA_MEM_SELECT_GGTT));
+		dev_priv->perf.oa.oa_buffer.head = head;
+	}

 	return ret;
 }
@@ -570,8 +577,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	u32 oastatus1;
-	u32 head;
-	u32 tail;
 	int ret;

 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
@@ -579,9 +584,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,

 	oastatus1 = I915_READ(GEN7_OASTATUS1);

-	head = dev_priv->perf.oa.oa_buffer.head;
-	tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
-
 	/* XXX: On Haswell we don't have a safe way to clear oastatus1
 	 * bits while the OA unit is enabled (while the tail pointer
 	 * may be updated asynchronously) so we ignore status bits
@@ -621,9 +623,6 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);

 		oastatus1 = I915_READ(GEN7_OASTATUS1);
-
-		head = dev_priv->perf.oa.oa_buffer.head;
-		tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
 	}

 	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
@@ -635,19 +634,7 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 			GEN7_OASTATUS1_REPORT_LOST;
 	}

-	ret = gen7_append_oa_reports(stream, buf, count, offset,
-				     &head, tail);
-
-	/* Note: we update the head pointer here even if an error
-	 * was returned since the error may represent a short read
-	 * where some some reports were successfully copied.
-	 */
-	I915_WRITE(GEN7_OASTATUS2,
-		   ((head & GEN7_OASTATUS2_HEAD_MASK) |
-		    OA_MEM_SELECT_GGTT));
-	dev_priv->perf.oa.oa_buffer.head = head;
-
-	return ret;
+	return gen7_append_oa_reports(stream, buf, count, offset);
 }

 /**
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 05/15] drm/i915/perf: improve tail race workaround
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (3 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 06/15] drm/i915/perf: improve invalid OA format debug message Lionel Landwerlin
                     ` (13 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

There's a HW race condition between OA unit tail pointer register
updates and writes to memory whereby the tail pointer can sometimes get
ahead of what's been written out to the OA buffer so far (in terms of
what's visible to the CPU).

Although this can be observed explicitly while copying reports to
userspace by checking for a zeroed report-id field in tail reports, we
want to account for this earlier, as part of the _oa_buffer_check to
avoid lots of redundant read() attempts.

Previously the driver used to define an effective tail pointer that
lagged the real pointer by a 'tail margin' measured in bytes derived
from OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
Unfortunately this was flawed considering that the OA unit may also
automatically generate non-periodic reports (such as on context switch)
or the OA unit may be enabled without any periodic sampling.

This improves how we define a tail pointer for reading that lags the
real tail pointer by at least %OA_TAIL_MARGIN_NSEC nanoseconds, which
gives enough time for the corresponding reports to become visible to the
CPU.

The driver now maintains two tail pointers:
 1) An 'aging' tail with an associated timestamp that is tracked until we
    can trust the corresponding data is visible to the CPU; at which point
    it is considered 'aged'.
 2) An 'aged' tail that can be used for read()ing.

The two separate pointers let us decouple read()s from tail pointer aging.

The tail pointers are checked and updated at a limited rate within a
hrtimer callback (the same callback that is used for delivering POLLIN
events) and since we're now measuring the wall clock time elapsed since
a given tail pointer was read the mechanism no longer cares about
the OA unit's periodic sampling frequency.

The natural place to handle the tail pointer updates was in
gen7_oa_buffer_is_empty() which is called as part of blocking reads and
the hrtimer callback used for polling, and so this was renamed to
oa_buffer_check() considering the added side effect while checking
whether the buffer contains data.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  60 ++++++++-
 drivers/gpu/drm/i915/i915_perf.c | 277 ++++++++++++++++++++++++++-------------
 2 files changed, 241 insertions(+), 96 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b618a8bc54f6..69a914870774 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2097,7 +2097,7 @@ struct i915_oa_ops {
 		    size_t *offset);

 	/**
-	 * @oa_buffer_is_empty: Check if OA buffer empty (false positives OK)
+	 * @oa_buffer_check: Check for OA buffer data + update tail
 	 *
 	 * This is either called via fops or the poll check hrtimer (atomic
 	 * ctx) without any locks taken.
@@ -2110,7 +2110,7 @@ struct i915_oa_ops {
 	 * here, which will be handled gracefully - likely resulting in an
 	 * %EAGAIN error for userspace.
 	 */
-	bool (*oa_buffer_is_empty)(struct drm_i915_private *dev_priv);
+	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
 };

 struct intel_cdclk_state {
@@ -2453,9 +2453,6 @@ struct drm_i915_private {

 			bool periodic;
 			int period_exponent;
-			int timestamp_frequency;
-
-			int tail_margin;

 			int metrics_set;

@@ -2471,6 +2468,59 @@ struct drm_i915_private {
 				int format_size;

 				/**
+				 * Locks reads and writes to all head/tail state
+				 *
+				 * Consider: the head and tail pointer state
+				 * needs to be read consistently from a hrtimer
+				 * callback (atomic context) and read() fop
+				 * (user context) with tail pointer updates
+				 * happening in atomic context and head updates
+				 * in user context and the (unlikely)
+				 * possibility of read() errors needing to
+				 * reset all head/tail state.
+				 *
+				 * Note: Contention or performance aren't
+				 * currently a significant concern here
+				 * considering the relatively low frequency of
+				 * hrtimer callbacks (5ms period) and that
+				 * reads typically only happen in response to a
+				 * hrtimer event and likely complete before the
+				 * next callback.
+				 *
+				 * Note: This lock is not held *while* reading
+				 * and copying data to userspace so the value
+				 * of head observed in htrimer callbacks won't
+				 * represent any partial consumption of data.
+				 */
+				spinlock_t ptr_lock;
+
+				/**
+				 * One 'aging' tail pointer and one 'aged'
+				 * tail pointer ready to used for reading.
+				 *
+				 * Initial values of 0xffffffff are invalid
+				 * and imply that an update is required
+				 * (and should be ignored by an attempted
+				 * read)
+				 */
+				struct {
+					u32 offset;
+				} tails[2];
+
+				/**
+				 * Index for the aged tail ready to read()
+				 * data up to.
+				 */
+				unsigned int aged_tail_idx;
+
+				/**
+				 * A monotonic timestamp for when the current
+				 * aging tail pointer was read; used to
+				 * determine when it is old enough to trust.
+				 */
+				u64 aging_timestamp;
+
+				/**
 				 * Although we can always read back the head
 				 * pointer register, we prefer to avoid
 				 * trusting the HW state, just to avoid any
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 83dc67a635fb..18734e1926b9 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -205,25 +205,49 @@

 #define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))

-/* There's a HW race condition between OA unit tail pointer register updates and
+/**
+ * DOC: OA Tail Pointer Race
+ *
+ * There's a HW race condition between OA unit tail pointer register updates and
  * writes to memory whereby the tail pointer can sometimes get ahead of what's
- * been written out to the OA buffer so far.
+ * been written out to the OA buffer so far (in terms of what's visible to the
+ * CPU).
+ *
+ * Although this can be observed explicitly while copying reports to userspace
+ * by checking for a zeroed report-id field in tail reports, we want to account
+ * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * read() attempts.
+ *
+ * In effect we define a tail pointer for reading that lags the real tail
+ * pointer by at least %OA_TAIL_MARGIN_NSEC nanoseconds, which gives enough
+ * time for the corresponding reports to become visible to the CPU.
+ *
+ * To manage this we actually track two tail pointers:
+ *  1) An 'aging' tail with an associated timestamp that is tracked until we
+ *     can trust the corresponding data is visible to the CPU; at which point
+ *     it is considered 'aged'.
+ *  2) An 'aged' tail that can be used for read()ing.
  *
- * Although this can be observed explicitly by checking for a zeroed report-id
- * field in tail reports, it seems preferable to account for this earlier e.g.
- * as part of the _oa_buffer_is_empty checks to minimize -EAGAIN polling cycles
- * in this situation.
+ * The two separate pointers let us decouple read()s from tail pointer aging.
  *
- * To give time for the most recent reports to land before they may be copied to
- * userspace, the driver operates as if the tail pointer effectively lags behind
- * the HW tail pointer by 'tail_margin' bytes. The margin in bytes is calculated
- * based on this constant in nanoseconds, the current OA sampling exponent
- * and current report size.
+ * The tail pointers are checked and updated at a limited rate within a hrtimer
+ * callback (the same callback that is used for delivering POLLIN events)
  *
- * There is also a fallback check while reading to simply skip over reports with
- * a zeroed report-id.
+ * Initially the tails are marked invalid with %INVALID_TAIL_PTR which
+ * indicates that an updated tail pointer is needed.
+ *
+ * Most of the implementation details for this workaround are in
+ * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ *
+ * Note for posterity: previously the driver used to define an effective tail
+ * pointer that lagged the real pointer by a 'tail margin' measured in bytes
+ * derived from %OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
+ * This was flawed considering that the OA unit may also automatically generate
+ * non-periodic reports (such as on context switch) or the OA unit may be
+ * enabled without any periodic sampling.
  */
 #define OA_TAIL_MARGIN_NSEC	100000ULL
+#define INVALID_TAIL_PTR	0xffffffff

 /* frequency for checking whether the OA unit has written new reports to the
  * circular OA buffer...
@@ -308,26 +332,116 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };

-/* NB: This is either called via fops or the poll check hrtimer (atomic ctx)
+/**
+ * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * @dev_priv: i915 device instance
  *
- * It's safe to read OA config state here unlocked, assuming that this is only
- * called while the stream is enabled, while the global OA configuration can't
- * be modified.
+ * This is either called via fops (for blocking reads in user ctx) or the poll
+ * check hrtimer (atomic ctx) to check the OA buffer tail pointer and check
+ * if there is data available for userspace to read.
  *
- * Note: we don't lock around the head/tail reads even though there's the slim
- * possibility of read() fop errors forcing a re-init of the OA buffer
- * pointers.  A race here could result in a false positive !empty status which
- * is acceptable.
+ * This function is central to providing a workaround for the OA unit tail
+ * pointer having a race with respect to what data is visible to the CPU.
+ * It is responsible for reading tail pointers from the hardware and giving
+ * the pointers time to 'age' before they are made available for reading.
+ * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
+ *
+ * Besides returning true when there is data available to read() this function
+ * also has the side effect of updating the oa_buffer.tails[], .aging_timestamp
+ * and .aged_tail_idx state used for reading.
+ *
+ * Note: It's safe to read OA config state here unlocked, assuming that this is
+ * only called while the stream is enabled, while the global OA configuration
+ * can't be modified.
+ *
+ * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_is_empty_fop_unlocked(struct drm_i915_private *dev_priv)
+static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
-	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
-	u32 head = dev_priv->perf.oa.oa_buffer.head;
-	u32 tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	unsigned long flags;
+	unsigned int aged_idx;
+	u32 oastatus1;
+	u32 head, hw_tail, aged_tail, aging_tail;
+	u64 now;
+
+	/* We have to consider the (unlikely) possibility that read() errors
+	 * could result in an OA buffer reset which might reset the head,
+	 * tails[] and aged_tail state.
+	 */
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: The head we observe here might effectively be a little out of
+	 * date (between head and tails[aged_idx].offset if there is currently
+	 * a read() in progress.
+	 */
+	head = dev_priv->perf.oa.oa_buffer.head;
+
+	aged_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
+	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
+
+	oastatus1 = I915_READ(GEN7_OASTATUS1);
+	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+
+	/* The tail pointer increases in 64 byte increments,
+	 * not in report_size steps...
+	 */
+	hw_tail &= ~(report_size - 1);
+
+	now = ktime_get_mono_fast_ns();
+
+	/* Update the aging tail
+	 *
+	 * We throttle aging tail updates until we have a new tail that
+	 * represents >= one report more data than is already available for
+	 * reading. This ensures there will be enough data for a successful
+	 * read once this new pointer has aged and ensures we will give the new
+	 * pointer time to age.
+	 */
+	if (aging_tail == INVALID_TAIL_PTR &&
+	    (aged_tail == INVALID_TAIL_PTR ||
+	     OA_TAKEN(hw_tail, aged_tail) >= report_size)) {
+		struct i915_vma *vma = dev_priv->perf.oa.oa_buffer.vma;
+		u32 gtt_offset = i915_ggtt_offset(vma);
+
+		/* Be paranoid and do a bounds check on the pointer read back
+		 * from hardware, just in case some spurious hardware condition
+		 * could put the tail out of bounds...
+		 */
+		if (hw_tail >= gtt_offset &&
+		    hw_tail < (gtt_offset + OA_BUFFER_SIZE)) {
+			dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset =
+				aging_tail = hw_tail;
+			dev_priv->perf.oa.oa_buffer.aging_timestamp = now;
+		} else {
+			DRM_ERROR("Ignoring spurious out of range OA buffer tail pointer = %u\n",
+				  hw_tail);
+		}
+	}
+
+	/* Update the aged tail
+	 *
+	 * Flip the tail pointer available for read()s once the aging tail is
+	 * old enough to trust that the corresponding data will be visible to
+	 * the CPU...
+	 */
+	if (aging_tail != INVALID_TAIL_PTR &&
+	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
+	     OA_TAIL_MARGIN_NSEC)) {
+		aged_idx ^= 1;
+		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
+
+		aged_tail = aging_tail;

-	return OA_TAKEN(tail, head) <
-		dev_priv->perf.oa.tail_margin + report_size;
+		/* Mark that we need a new pointer to start aging... */
+		dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
+	}
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	return aged_tail == INVALID_TAIL_PTR ?
+		false : OA_TAKEN(aged_tail, head) >= report_size;
 }

 /**
@@ -442,58 +556,50 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
-	int tail_margin = dev_priv->perf.oa.tail_margin;
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
 	u32 mask = (OA_BUFFER_SIZE - 1);
 	size_t start_offset = *offset;
-	u32 head, oastatus1, tail;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
 	u32 taken;
 	int ret = 0;

 	if (WARN_ON(!stream->enabled))
 		return -EIO;

-	head = dev_priv->perf.oa.oa_buffer.head - gtt_offset;
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);

-	/* An out of bounds or misaligned head pointer implies a driver bug
-	 * since we are in full control of head pointer which should only
-	 * be incremented by multiples of the report size (notably also
-	 * all a power of two).
-	 */
-	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size,
-		      "Inconsistent OA buffer head pointer = %u\n", head))
-		return -EIO;
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;

-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	tail = (oastatus1 & GEN7_OASTATUS1_TAIL_MASK) - gtt_offset;
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);

-	/* The OA unit is expected to wrap the tail pointer according to the OA
-	 * buffer size
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
 	 */
-	if (tail > OA_BUFFER_SIZE) {
-		DRM_ERROR("Inconsistent OA buffer tail pointer = %u: force restart\n",
-			  tail);
-		dev_priv->perf.oa.ops.oa_disable(dev_priv);
-		dev_priv->perf.oa.ops.oa_enable(dev_priv);
-		return -EIO;
-	}
-
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;

-	/* The tail pointer increases in 64 byte increments, not in report_size
-	 * steps...
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
 	 */
-	tail &= ~(report_size - 1);
+	head -= gtt_offset;
+	tail -= gtt_offset;

-	/* Move the tail pointer back by the current tail_margin to account for
-	 * the possibility that the latest reports may not have really landed
-	 * in memory yet...
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
 	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;

-	if (OA_TAKEN(tail, head) < report_size + tail_margin)
-		return -EAGAIN;
-
-	tail -= tail_margin;
-	tail &= mask;

 	for (/* none */;
 	     (taken = OA_TAKEN(tail, head));
@@ -540,6 +646,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,


 	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
 		/* We removed the gtt_offset for the copy loop above, indexing
 		 * relative to oa_buf_base so put back here...
 		 */
@@ -549,6 +657,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 			   ((head & GEN7_OASTATUS2_HEAD_MASK) |
 			    OA_MEM_SELECT_GGTT));
 		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 	}

 	return ret;
@@ -659,14 +769,8 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 	if (!dev_priv->perf.oa.periodic)
 		return -EIO;

-	/* Note: the oa_buffer_is_empty() condition is ok to run unlocked as it
-	 * just performs mmio reads of the OA buffer head + tail pointers and
-	 * it's assumed we're handling some operation that implies the stream
-	 * can't be destroyed until completion (such as a read()) that ensures
-	 * the device + OA buffer can't disappear
-	 */
 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					!dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv));
+					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
 }

 /**
@@ -809,6 +913,9 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);

 	/* Pre-DevBDW: OABUFFER must be set with counters off,
 	 * before OASTATUS1, but after OASTATUS2
@@ -820,6 +927,12 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)

 	I915_WRITE(GEN7_OASTATUS1, gtt_offset | OABUFFER_SIZE_16M); /* tail */

+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
 	/* On Haswell we have to track which OASTATUS1 flags we've
 	 * already seen since they can't be cleared while periodic
 	 * sampling is enabled.
@@ -1077,12 +1190,6 @@ static void i915_oa_stream_disable(struct i915_perf_stream *stream)
 		hrtimer_cancel(&dev_priv->perf.oa.poll_check_timer);
 }

-static u64 oa_exponent_to_ns(struct drm_i915_private *dev_priv, int exponent)
-{
-	return div_u64(1000000000ULL * (2ULL << exponent),
-		       dev_priv->perf.oa.timestamp_frequency);
-}
-
 static const struct i915_perf_stream_ops i915_oa_stream_ops = {
 	.destroy = i915_oa_stream_destroy,
 	.enable = i915_oa_stream_enable,
@@ -1173,20 +1280,9 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	dev_priv->perf.oa.metrics_set = props->metrics_set;

 	dev_priv->perf.oa.periodic = props->oa_periodic;
-	if (dev_priv->perf.oa.periodic) {
-		u32 tail;
-
+	if (dev_priv->perf.oa.periodic)
 		dev_priv->perf.oa.period_exponent = props->oa_period_exponent;

-		/* See comment for OA_TAIL_MARGIN_NSEC for details
-		 * about this tail_margin...
-		 */
-		tail = div64_u64(OA_TAIL_MARGIN_NSEC,
-				 oa_exponent_to_ns(dev_priv,
-						   props->oa_period_exponent));
-		dev_priv->perf.oa.tail_margin = (tail + 1) * format_size;
-	}
-
 	if (stream->ctx) {
 		ret = oa_get_render_ctx_id(stream);
 		if (ret)
@@ -1359,7 +1455,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);

-	if (!dev_priv->perf.oa.ops.oa_buffer_is_empty(dev_priv)) {
+	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -2054,6 +2150,7 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 	INIT_LIST_HEAD(&dev_priv->perf.streams);
 	mutex_init(&dev_priv->perf.lock);
 	spin_lock_init(&dev_priv->perf.hook_lock);
+	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

 	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
 	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
@@ -2061,10 +2158,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
 	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
 	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_is_empty =
-		gen7_oa_buffer_is_empty_fop_unlocked;
-
-	dev_priv->perf.oa.timestamp_frequency = 12500000;
+	dev_priv->perf.oa.ops.oa_buffer_check =
+		gen7_oa_buffer_check_unlocked;

 	dev_priv->perf.oa.oa_formats = hsw_oa_formats;

--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 06/15] drm/i915/perf: improve invalid OA format debug message
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (4 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 05/15] drm/i915/perf: improve tail race workaround Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 07/15] drm/i915/perf: better pipeline aged/aging tail updates Lionel Landwerlin
                     ` (12 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

A minor improvement to debugging output

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 18734e1926b9..08cc2b0dd734 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1904,11 +1904,13 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			break;
 		case DRM_I915_PERF_PROP_OA_FORMAT:
 			if (value == 0 || value >= I915_OA_FORMAT_MAX) {
-				DRM_DEBUG("Invalid OA report format\n");
+				DRM_DEBUG("Out-of-range OA report format %llu\n",
+					  value);
 				return -EINVAL;
 			}
 			if (!dev_priv->perf.oa.oa_formats[value].size) {
-				DRM_DEBUG("Invalid OA report format\n");
+				DRM_DEBUG("Unsupported OA report format %llu\n",
+					  value);
 				return -EINVAL;
 			}
 			props->oa_format = value;
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 07/15] drm/i915/perf: better pipeline aged/aging tail updates
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (5 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 06/15] drm/i915/perf: improve invalid OA format debug message Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 08/15] drm/i915/perf: rate limit spurious oa report notice Lionel Landwerlin
                     ` (11 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

This updates the tail pointer race workaround handling to updating the
'aged' pointer before looking to start aging a new one. There's the
possibility that there is already new data available and so we can
immediately start aging a new pointer without having to first wait for a
later hrtimer callback (and then another to age).

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_perf.c | 41 ++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 08cc2b0dd734..5738b99caa5b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -391,6 +391,29 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)

 	now = ktime_get_mono_fast_ns();

+	/* Update the aged tail
+	 *
+	 * Flip the tail pointer available for read()s once the aging tail is
+	 * old enough to trust that the corresponding data will be visible to
+	 * the CPU...
+	 *
+	 * Do this before updating the aging pointer in case we may be able to
+	 * immediately start aging a new pointer too (if new data has become
+	 * available) without needing to wait for a later hrtimer callback.
+	 */
+	if (aging_tail != INVALID_TAIL_PTR &&
+	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
+	     OA_TAIL_MARGIN_NSEC)) {
+		aged_idx ^= 1;
+		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
+
+		aged_tail = aging_tail;
+
+		/* Mark that we need a new pointer to start aging... */
+		dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
+		aging_tail = INVALID_TAIL_PTR;
+	}
+
 	/* Update the aging tail
 	 *
 	 * We throttle aging tail updates until we have a new tail that
@@ -420,24 +443,6 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 		}
 	}

-	/* Update the aged tail
-	 *
-	 * Flip the tail pointer available for read()s once the aging tail is
-	 * old enough to trust that the corresponding data will be visible to
-	 * the CPU...
-	 */
-	if (aging_tail != INVALID_TAIL_PTR &&
-	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
-	     OA_TAIL_MARGIN_NSEC)) {
-		aged_idx ^= 1;
-		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
-
-		aged_tail = aging_tail;
-
-		/* Mark that we need a new pointer to start aging... */
-		dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset = INVALID_TAIL_PTR;
-	}
-
 	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);

 	return aged_tail == INVALID_TAIL_PTR ?
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 08/15] drm/i915/perf: rate limit spurious oa report notice
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (6 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 07/15] drm/i915/perf: better pipeline aged/aging tail updates Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 09/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
                     ` (10 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

This change is pre-emptively aiming to avoid a potential cause of kernel
logging noise in case some condition were to result in us seeing invalid
OA reports.

The workaround for the OA unit's tail pointer race condition is what
avoids the primary known cause of invalid reports being seen and with
that in place we aren't expecting to see this notice but it can't be
entirely ruled out.

Just in case some condition does lead to the notice then it's likely
that it will be triggered repeatedly while attempting to append a
sequence of reports and depending on the configured OA sampling
frequency that might be a large number of repeat notices.

v2: (Chris) avoid inconsistent warning on throttle with
    printk_ratelimit()
v3: (Matt) init and summarise with stream init/close not driver init/fini

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  6 ++++++
 drivers/gpu/drm/i915/i915_perf.c | 28 +++++++++++++++++++++++++++-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 69a914870774..fb2188c8d72f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2451,6 +2451,12 @@ struct drm_i915_private {
 			wait_queue_head_t poll_wq;
 			bool pollin;

+			/**
+			 * For rate limiting any notifications of spurious
+			 * invalid OA reports
+			 */
+			struct ratelimit_state spurious_report_rs;
+
 			bool periodic;
 			int period_exponent;

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 5738b99caa5b..3277a52ce98e 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -632,7 +632,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		 * copying it to userspace...
 		 */
 		if (report32[0] == 0) {
-			DRM_NOTE("Skipping spurious, invalid OA report\n");
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
 			continue;
 		}

@@ -913,6 +914,11 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 		oa_put_render_ctx_id(stream);

 	dev_priv->perf.oa.exclusive_stream = NULL;
+
+	if (dev_priv->perf.oa.spurious_report_rs.missed) {
+		DRM_NOTE("%d spurious OA report notices suppressed due to ratelimiting\n",
+			 dev_priv->perf.oa.spurious_report_rs.missed);
+	}
 }

 static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
@@ -1268,6 +1274,26 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 		return -EINVAL;
 	}

+	/* We set up some ratelimit state to potentially throttle any _NOTES
+	 * about spurious, invalid OA reports which we don't forward to
+	 * userspace.
+	 *
+	 * The initialization is associated with opening the stream (not driver
+	 * init) considering we print a _NOTE about any throttling when closing
+	 * the stream instead of waiting until driver _fini which no one would
+	 * ever see.
+	 *
+	 * Using the same limiting factors as printk_ratelimit()
+	 */
+	ratelimit_state_init(&dev_priv->perf.oa.spurious_report_rs,
+			     5 * HZ, 10);
+	/* Since we use a DRM_NOTE for spurious reports it would be
+	 * inconsistent to let __ratelimit() automatically print a warning for
+	 * throttling.
+	 */
+	ratelimit_set_flags(&dev_priv->perf.oa.spurious_report_rs,
+			    RATELIMIT_MSG_ON_RELEASE);
+
 	stream->sample_size = sizeof(struct drm_i915_perf_record_header);

 	format_size = dev_priv->perf.oa.oa_formats[props->oa_format].size;
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 09/15] drm/i915: expose _SLICE_MASK GETPARM
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (7 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 08/15] drm/i915/perf: rate limit spurious oa report notice Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM Lionel Landwerlin
                     ` (9 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables userspace to determine the number of slices enabled and also
know what specific slices are enabled. This information is required, for
example, to be able to analyse some OA counter reports where the counter
configuration depends on the HW slice configuration.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 5 +++++
 include/uapi/drm/i915_drm.h     | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index cc7393e65e99..f13b2c3ce6eb 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -358,6 +358,11 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		 */
 		value = 1;
 		break;
+	case I915_PARAM_SLICE_MASK:
+		value = INTEL_INFO(dev_priv)->sseu.slice_mask;
+		if (!value)
+			return -ENODEV;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index f24a80d2d42e..25695c3d9a76 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -418,6 +418,9 @@ typedef struct drm_i915_irq_wait {
  */
 #define I915_PARAM_HAS_EXEC_CAPTURE	 45

+/* Query the mask of slices available for this system */
+#define I915_PARAM_SLICE_MASK		 46
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (8 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 09/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
                     ` (8 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Assuming a uniform mask across all slices, this enables userspace to
determine the specific sub slices enabled. This information is required,
for example, to be able to analyse some OA counter reports where the
counter configuration depends on the HW sub slice configuration.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 5 +++++
 include/uapi/drm/i915_drm.h     | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index f13b2c3ce6eb..8769692b9408 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -363,6 +363,11 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		if (!value)
 			return -ENODEV;
 		break;
+	case I915_PARAM_SUBSLICE_MASK:
+		value = INTEL_INFO(dev_priv)->sseu.subslice_mask;
+		if (!value)
+			return -ENODEV;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 25695c3d9a76..464547d08173 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -421,6 +421,11 @@ typedef struct drm_i915_irq_wait {
 /* Query the mask of slices available for this system */
 #define I915_PARAM_SLICE_MASK		 46

+/* Assuming it's uniform for each slice, this queries the mask of subslices
+ * per-slice for this system.
+ */
+#define I915_PARAM_SUBSLICE_MASK	 47
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
--
2.11.0

* [PATCH v5 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (9 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
                     ` (7 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Adds static OA unit MUX, B Counter and Flex EU register configurations
for the basic render metrics set on Broadwell, Cherryview, Skylake and
Broxton. These are auto-generated from an XML description of metric
sets, currently maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml WHITELIST=RenderBasic

v2: add newlines to debug messages + fix comment (Matthew Auld)

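For context, the sysfs "id" files these configs register are how
userspace maps a metric set GUID to the integer passed when opening a
perf stream. A rough sketch of that flow (the path and GUID below are
the BDW RenderBasic entry added by this patch; the gen8+ OA report
format enum is only added later in this series, and error handling is
simplified):

  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <xf86drm.h>
  #include <i915_drm.h>

  #define RENDER_BASIC_ID_PATH \
          "/sys/class/drm/card0/metrics/b541bd57-0e0f-4154-b4c0-5858010a2bf7/id"

  static int open_render_basic_stream(int drm_fd)
  {
          unsigned long long metrics_set;
          FILE *f = fopen(RENDER_BASIC_ID_PATH, "r");

          if (!f)
                  return -1;
          if (fscanf(f, "%llu", &metrics_set) != 1) {
                  fclose(f);
                  return -1;
          }
          fclose(f);

          uint64_t properties[] = {
                  DRM_I915_PERF_PROP_SAMPLE_OA, 1,
                  DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
                  DRM_I915_PERF_PROP_OA_FORMAT,
                          I915_OA_FORMAT_A32u40_A4u32_B8_C8,
                  DRM_I915_PERF_PROP_OA_EXPONENT, 16, /* timer period */
          };
          struct drm_i915_perf_open_param param = {
                  .flags = I915_PERF_FLAG_FD_CLOEXEC,
                  .num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
                  .properties_ptr = (uintptr_t)properties,
          };

          /* On success this is a new fd to read() OA reports from. */
          return drmIoctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }
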
Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/Makefile         |   8 +-
 drivers/gpu/drm/i915/i915_drv.h       |   2 +
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 380 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h    |  38 ++++
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 238 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h    |  38 ++++
 drivers/gpu/drm/i915/i915_oa_chv.c    | 225 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h    |  38 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 228 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h |  38 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 236 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h |  38 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 247 ++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h |  38 ++++
 14 files changed, 1791 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 2cf04504e494..41400a138a1e 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -127,7 +127,13 @@ i915-y += i915_vgpu.o

 # perf code
 i915-y += i915_perf.o \
-	  i915_oa_hsw.o
+	  i915_oa_hsw.o \
+	  i915_oa_bdw.o \
+	  i915_oa_chv.o \
+	  i915_oa_sklgt2.o \
+	  i915_oa_sklgt3.o \
+	  i915_oa_sklgt4.o \
+	  i915_oa_bxt.o

 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fb2188c8d72f..ffa1fc5eddfd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2466,6 +2466,8 @@ struct drm_i915_private {
 			int mux_regs_len;
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
+			const struct i915_oa_reg *flex_regs;
+			int flex_regs_len;

 			struct {
 				struct i915_vma *vma;
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
new file mode 100644
index 000000000000..b0b1b75fb431
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -0,0 +1,380 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bdw.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bdw = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14110014 },
+	{ _MMIO(0x9888), 0x14310014 },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x183d0800 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x08584000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x185b2400 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380001 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00104000 },
+	{ _MMIO(0x9888), 0x08104000 },
+	{ _MMIO(0x9888), 0x00110030 },
+	{ _MMIO(0x9888), 0x08110031 },
+	{ _MMIO(0x9888), 0x10110000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x16130020 },
+	{ _MMIO(0x9888), 0x06308000 },
+	{ _MMIO(0x9888), 0x08308000 },
+	{ _MMIO(0x9888), 0x06311800 },
+	{ _MMIO(0x9888), 0x08311880 },
+	{ _MMIO(0x9888), 0x10310000 },
+	{ _MMIO(0x9888), 0x0e334000 },
+	{ _MMIO(0x9888), 0x16330080 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x0d888000 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a00a0 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2820 },
+	{ _MMIO(0x9888), 0x258b2550 },
+	{ _MMIO(0x9888), 0x198c1000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801110 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801465 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800ca5 },
+	{ _MMIO(0x9888), 0x41800003 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x14910014 },
+	{ _MMIO(0x9888), 0x14b10014 },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0e1f8000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00dc4000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x00bd8000 },
+	{ _MMIO(0x9888), 0x18bd0800 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x00d84000 },
+	{ _MMIO(0x9888), 0x08d84000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db2400 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x0c9f0800 },
+	{ _MMIO(0x9888), 0x0e9f2a00 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b80001 },
+	{ _MMIO(0x9888), 0x00b92000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x00904000 },
+	{ _MMIO(0x9888), 0x08904000 },
+	{ _MMIO(0x9888), 0x00910030 },
+	{ _MMIO(0x9888), 0x08910031 },
+	{ _MMIO(0x9888), 0x10910000 },
+	{ _MMIO(0x9888), 0x00934000 },
+	{ _MMIO(0x9888), 0x16930020 },
+	{ _MMIO(0x9888), 0x06b08000 },
+	{ _MMIO(0x9888), 0x08b08000 },
+	{ _MMIO(0x9888), 0x06b11800 },
+	{ _MMIO(0x9888), 0x08b11880 },
+	{ _MMIO(0x9888), 0x10b10000 },
+	{ _MMIO(0x9888), 0x0eb34000 },
+	{ _MMIO(0x9888), 0x16b30080 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88b800 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x1b8a0080 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2840 },
+	{ _MMIO(0x9888), 0x258b26a0 },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1100 },
+	{ _MMIO(0x9888), 0x018d2000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801550 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800400 },
+	{ _MMIO(0x9888), 0x458004a1 },
+	{ _MMIO(0x9888), 0x53805555 },
+	{ _MMIO(0x9888), 0x47800421 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801421 },
+	{ _MMIO(0x9888), 0x41800845 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_render_basic_0_slices_0x01);
+		return mux_config_render_basic_0_slices_0x01;
+	} else if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_basic_1_slices_0x02);
+		return mux_config_render_basic_1_slices_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.h b/drivers/gpu/drm/i915/i915_oa_bdw.h
new file mode 100644
index 000000000000..ea6969aaf91a
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BDW_H__
+#define __I915_OA_BDW_H__
+
+extern int i915_oa_n_builtin_metric_sets_bdw;
+
+extern int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
new file mode 100644
index 000000000000..b10e44643bc1
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -0,0 +1,238 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bxt.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bxt = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x166c00f0 },
+	{ _MMIO(0x9888), 0x12120280 },
+	{ _MMIO(0x9888), 0x12320280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x419000a0 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2e0800 },
+	{ _MMIO(0x9888), 0x0e2e5900 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x1c4f0010 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a0fcc00 },
+	{ _MMIO(0x9888), 0x1c0f0002 },
+	{ _MMIO(0x9888), 0x1c2c0040 },
+	{ _MMIO(0x9888), 0x00101000 },
+	{ _MMIO(0x9888), 0x04101000 },
+	{ _MMIO(0x9888), 0x00114000 },
+	{ _MMIO(0x9888), 0x08114000 },
+	{ _MMIO(0x9888), 0x00120020 },
+	{ _MMIO(0x9888), 0x08120021 },
+	{ _MMIO(0x9888), 0x00141000 },
+	{ _MMIO(0x9888), 0x08141000 },
+	{ _MMIO(0x9888), 0x02308000 },
+	{ _MMIO(0x9888), 0x04302000 },
+	{ _MMIO(0x9888), 0x06318000 },
+	{ _MMIO(0x9888), 0x08318000 },
+	{ _MMIO(0x9888), 0x06320800 },
+	{ _MMIO(0x9888), 0x08320840 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x08344000 },
+	{ _MMIO(0x9888), 0x0d931831 },
+	{ _MMIO(0x9888), 0x0f939f3f },
+	{ _MMIO(0x9888), 0x01939e80 },
+	{ _MMIO(0x9888), 0x039303bc },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1993002a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900187 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901110 },
+	{ _MMIO(0x9888), 0x43900423 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900c02 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900020 },
+	{ _MMIO(0x9888), 0x59901111 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900821 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		*len = ARRAY_SIZE(mux_config_render_basic_0_sku_gte_0x03);
+		return mux_config_render_basic_0_sku_gte_0x03;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "22b9519a-e9ba-4c41-8b54-f4f8ca14fa0a",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.h b/drivers/gpu/drm/i915/i915_oa_bxt.h
new file mode 100644
index 000000000000..9e86662c891d
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BXT_H__
+#define __I915_OA_BXT_H__
+
+extern int i915_oa_n_builtin_metric_sets_bxt;
+
+extern int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
new file mode 100644
index 000000000000..1c2889e819c8
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -0,0 +1,225 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_chv.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_chv = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x285a0006 },
+	{ _MMIO(0x9888), 0x2c110014 },
+	{ _MMIO(0x9888), 0x2e110000 },
+	{ _MMIO(0x9888), 0x2c310014 },
+	{ _MMIO(0x9888), 0x2e310000 },
+	{ _MMIO(0x9888), 0x2b8303df },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x00580888 },
+	{ _MMIO(0x9888), 0x1e5a0015 },
+	{ _MMIO(0x9888), 0x205a0014 },
+	{ _MMIO(0x9888), 0x045a0000 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x02180500 },
+	{ _MMIO(0x9888), 0x00190555 },
+	{ _MMIO(0x9888), 0x021d0500 },
+	{ _MMIO(0x9888), 0x021f0a00 },
+	{ _MMIO(0x9888), 0x00380444 },
+	{ _MMIO(0x9888), 0x02390500 },
+	{ _MMIO(0x9888), 0x003a0666 },
+	{ _MMIO(0x9888), 0x00100111 },
+	{ _MMIO(0x9888), 0x06110030 },
+	{ _MMIO(0x9888), 0x0a110031 },
+	{ _MMIO(0x9888), 0x0e110046 },
+	{ _MMIO(0x9888), 0x04110000 },
+	{ _MMIO(0x9888), 0x00110000 },
+	{ _MMIO(0x9888), 0x00130111 },
+	{ _MMIO(0x9888), 0x00300444 },
+	{ _MMIO(0x9888), 0x08310030 },
+	{ _MMIO(0x9888), 0x0c310031 },
+	{ _MMIO(0x9888), 0x10310046 },
+	{ _MMIO(0x9888), 0x04310000 },
+	{ _MMIO(0x9888), 0x00310000 },
+	{ _MMIO(0x9888), 0x00330444 },
+	{ _MMIO(0x9888), 0x038a0a00 },
+	{ _MMIO(0x9888), 0x018b0fff },
+	{ _MMIO(0x9888), 0x038b0a00 },
+	{ _MMIO(0x9888), 0x01855000 },
+	{ _MMIO(0x9888), 0x03850055 },
+	{ _MMIO(0x9888), 0x13830021 },
+	{ _MMIO(0x9888), 0x15830020 },
+	{ _MMIO(0x9888), 0x1783002f },
+	{ _MMIO(0x9888), 0x1983002e },
+	{ _MMIO(0x9888), 0x1b83002d },
+	{ _MMIO(0x9888), 0x1d83002c },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x01840555 },
+	{ _MMIO(0x9888), 0x03840500 },
+	{ _MMIO(0x9888), 0x23800074 },
+	{ _MMIO(0x9888), 0x2580007d },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x01805000 },
+	{ _MMIO(0x9888), 0x03800055 },
+	{ _MMIO(0x9888), 0x01865000 },
+	{ _MMIO(0x9888), 0x03860055 },
+	{ _MMIO(0x9888), 0x01875000 },
+	{ _MMIO(0x9888), 0x03870055 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x4380000a },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x4780000a },
+	{ _MMIO(0x9888), 0x49800000 },
+	{ _MMIO(0x9888), 0x4b800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x55800000 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_basic);
+	return mux_config_render_basic;
+}
+
+int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "9d8a3af5-c02c-4a4a-b947-f1672469e0fb",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.h b/drivers/gpu/drm/i915/i915_oa_chv.h
new file mode 100644
index 000000000000..e494200ff04f
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_CHV_H__
+#define __I915_OA_CHV_H__
+
+extern int i915_oa_n_builtin_metric_sets_chv;
+
+extern int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
new file mode 100644
index 000000000000..ea0317a4781c
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -0,0 +1,228 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt2.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0080 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0d2000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162c2200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190001f },
+	{ _MMIO(0x9888), 0x51904400 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c21 },
+	{ _MMIO(0x9888), 0x47900061 },
+	{ _MMIO(0x9888), 0x57904440 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900004 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if (dev_priv->drm.pdev->revision >= 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_basic_1_sku_gte_0x02);
+		return mux_config_render_basic_1_sku_gte_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.h b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
new file mode 100644
index 000000000000..e99b03f1bf9e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT2_H__
+#define __I915_OA_SKLGT2_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt2;
+
+extern int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
new file mode 100644
index 000000000000..a21211136191
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -0,0 +1,236 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt3.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0380 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162ca200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0200 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x51907710 },
+	{ _MMIO(0x9888), 0x419020a0 },
+	{ _MMIO(0x9888), 0x55901515 },
+	{ _MMIO(0x9888), 0x45900529 },
+	{ _MMIO(0x9888), 0x47901025 },
+	{ _MMIO(0x9888), 0x57907770 },
+	{ _MMIO(0x9888), 0x49902100 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900108 },
+	{ _MMIO(0x9888), 0x59900007 },
+	{ _MMIO(0x9888), 0x43902108 },
+	{ _MMIO(0x9888), 0x53907777 },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_basic);
+	return mux_config_render_basic;
+}
+
+int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.h b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
new file mode 100644
index 000000000000..ff7ec3206bf2
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT3_H__
+#define __I915_OA_SKLGT3_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt3;
+
+extern int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
new file mode 100644
index 000000000000..1af4ebe7d51f
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -0,0 +1,247 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt4.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x176c01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e03b0 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4ca400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0230 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0acc2000 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x088d8000 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x0e8f1000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8800 },
+	{ _MMIO(0x9888), 0x1b4e0020 },
+	{ _MMIO(0x9888), 0x096c5300 },
+	{ _MMIO(0x9888), 0x116c0000 },
+	{ _MMIO(0x9888), 0x1d6c0000 },
+	{ _MMIO(0x9888), 0x091b8000 },
+	{ _MMIO(0x9888), 0x1b1c8000 },
+	{ _MMIO(0x9888), 0x0b4c2000 },
+	{ _MMIO(0x9888), 0x090d8000 },
+	{ _MMIO(0x9888), 0x0f0f1000 },
+	{ _MMIO(0x9888), 0x172c0800 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x5190ff30 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55903033 },
+	{ _MMIO(0x9888), 0x45901421 },
+	{ _MMIO(0x9888), 0x47900803 },
+	{ _MMIO(0x9888), 0x5790fff1 },
+	{ _MMIO(0x9888), 0x49900001 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x5990000f },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x5390ffff },
+};
+
+static const struct i915_oa_reg *
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_basic);
+	return mux_config_render_basic;
+}
+
+int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	int mux_len;
+
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.h b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
new file mode 100644
index 000000000000..4faa8c4579cd
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
@@ -0,0 +1,38 @@
+/*
+ * Autogenerated file, DO NOT EDIT manually!
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT4_H__
+#define __I915_OA_SKLGT4_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt4;
+
+extern int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+#endif
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v5 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (10 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24 10:35     ` Chris Wilson
  2017-04-24  7:17   ` [PATCH v5 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
                     ` (6 subsequent siblings)
  18 siblings, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency, which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands), is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state,
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state, considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (which includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, we can
no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try to subtract the progress
of other contexts before forwarding reports via read(), we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require dev.i915.perf_stream_paranoid
to be unset for non-root access to OA metrics.
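
As a rough illustration of what that means for userspace (not part of
this patch): a minimal sketch assuming libdrm's uapi i915_drm.h is on
the include path and using the existing i915 perf open ioctl. The
metric set id and exponent below are placeholders; a real id would
normally be read from the metric set's sysfs "id" file.

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int open_oa_stream(int drm_fd)
{
	uint64_t props[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, 1,	/* placeholder id */
		DRM_I915_PERF_PROP_OA_FORMAT,
			I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,	/* arbitrary period */
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(props) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)props,
	};

	/* On Gen8+ this is expected to fail with EACCES for a non-root
	 * caller while dev.i915.perf_stream_paranoid=1, even if a
	 * specific context is given via DRM_I915_PERF_PROP_CTX_HANDLE.
	 */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}

For local development the restriction can be relaxed with
"sysctl dev.i915.perf_stream_paranoid=0".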

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
 drivers/gpu/drm/i915/i915_drv.h         |  45 +-
 drivers/gpu/drm/i915/i915_gem_context.h |   1 +
 drivers/gpu/drm/i915/i915_perf.c        | 969 +++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h         |  22 +
 drivers/gpu/drm/i915/intel_lrc.c        |   5 +
 include/uapi/drm/i915_drm.h             |  19 +-
 6 files changed, 968 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ffa1fc5eddfd..41a97ade39ea 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2064,9 +2064,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);

 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2097,20 +2105,13 @@ struct i915_oa_ops {
 		    size_t *offset);

 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };

 struct intel_cdclk_state {
@@ -2472,6 +2473,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;

@@ -2541,6 +2543,14 @@ struct drm_i915_private {
 			} oa_buffer;

 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;

 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,

 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_update_reg_state(struct intel_engine_cs *engine,
+			      struct i915_gem_context *ctx,
+			      uint32_t *reg_state);

 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 4af2ab94558b..3f4ce73bea43 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -151,6 +151,7 @@ struct i915_gem_context {
 		u64 lrc_desc;
 		int pin_count;
 		bool initialised;
+		atomic_t oa_state_dirty;
 	} engine[I915_NUM_ENGINES];

 	/** ring_size: size for allocating the per-engine ring buffer */
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 3277a52ce98e..96fd59359109 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@

 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"

 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;

 #define INVALID_CTX_ID 0xffffffff

+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+

 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };

+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)

 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };

+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;

@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;

-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);

 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;

@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context, it needs
+		 * to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiate appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering how we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, then a buffer overflow does anyway likely indicate
+	 * that something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * gen7_append_oa_reports - Copies all buffered OA reports into userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;

-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);

 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;

 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }

 /**
@@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	int ret;

-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		int ret;

-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ret = engine->context_pin(engine, stream->ctx);
-	if (ret)
-		goto unlock;
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;

-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we pin
+		 * the vma to ensure the ID remains fixed.
+		 */
+		ret = engine->context_pin(engine, stream->ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}

-unlock:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);

-	return ret;
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
+
+	return 0;
 }

 /**
@@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[RCS];

-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);

-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);

-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }

 static void
@@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }

+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1113,6 +1489,225 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }

+/*
+ * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
+ *
+ * MMIO reads or writes to any of the registers listed in the
+ * “Register State Context image” subsections through HOST/IA
+ * MMIO interface for debug purposes must follow the steps below:
+ *
+ * - SW should set the Force Wakeup bit to prevent GT from entering C6.
+ * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
+ * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
+ * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
+ * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
+ * - Read/Write to desired MMIO registers.
+ * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
+ * - Force Wakeup bit should be reset to enable C6 entry.
+ *
+ * XXX: don't nest or overlap calls to this API, it has no ref
+ * counting to track how many entities require the RCS to be
+ * blocked from being idle.
+ */
+static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
+		GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
+	int ret = 0;
+
+	/* There's only no active context while idle in execlist mode
+	 * (though we shouldn't be using this in any other case)
+	 */
+	if (WARN_ON(!i915.enable_execlists))
+		return -EINVAL;
+
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+
+	/* Note: we don't currently have a good handle on the maximum
+	 * latency for this wake up so while we only need to hold rcs
+	 * busy from process context we at least keep the waiting
+	 * interruptible...
+	 */
+	ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
+	if (ret == -ETIMEDOUT)
+		DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
+
+	if (ret)
+		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+
+	return ret;
+}
+
+static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	if (WARN_ON(!i915.enable_execlists))
+		return;
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+	if (ret)
+		return ret;
+
+	/* The OA register config is set up through the context image. This image
+	 * might be written to by the GPU on context switch (in particular on
+	 * lite-restore). This means we can't safely update a context's image,
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv,
+				     I915_WAIT_INTERRUPTIBLE |
+				     I915_WAIT_LOCKED);
+	if (ret) {
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		return ret;
+	}
+
+	/* Since execlist submission may be happening asynchronously here,
+	 * we first mark existing contexts dirty before we update the current
+	 * context so if any switches happen in the middle we can expect
+	 * that the act of scheduling will have itself ensured a consistent
+	 * OA state update.
+	 */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		/* The actual update of the register state context will happen
+		 * the next time this logical ring is submitted. (See
+		 * i915_oa_update_reg_state() which hooks into
+		 * execlists_update_context())
+		 */
+		atomic_set(&ctx->engine[RCS].oa_state_dirty, 1);
+	}
+
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	/* Now update the current context.
+	 *
+	 * Note: Using MMIO to update per-context registers requires
+	 * some extra care...
+	 */
+	ret = gen8_begin_ctx_mmio(dev_priv);
+	if (ret) {
+		DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
+		return ret;
+	}
+
+	I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
+					GEN8_OA_TIMER_PERIOD_SHIFT) |
+				      (dev_priv->perf.oa.periodic ?
+				       GEN8_OA_TIMER_ENABLE : 0) |
+				      GEN8_OA_COUNTER_RESUME));
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
+			dev_priv->perf.oa.flex_regs_len);
+
+	gen8_end_ctx_mmio(dev_priv);
+
+	return 0;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed causing a lot of redundant
+	 * work to processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering to use that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
+		       dev_priv->perf.oa.mux_regs_len);
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	/* It takes a fairly long time for a new MUX configuration to
+	 * be applied after these register writes. This delay
+	 * duration is taken from Haswell (derived empirically based on
+	 * the render_basic config) but hopefully it covers the
+	 * maximum configuration latency for Gen8+ too...
+	 */
+	usleep_range(15000, 20000);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	ret = gen8_configure_all_contexts(dev_priv);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* NOP */
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1157,6 +1752,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }

+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory, which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1183,6 +1801,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }

+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1361,6 +1984,88 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }

+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct intel_engine_cs *engine,
+					   struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+void i915_oa_update_reg_state(struct intel_engine_cs *engine,
+			      struct i915_gem_context *ctx,
+			      u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	/* XXX: We don't take a lock here and this may run async with
+	 * respect to stream methods. Notably we don't want to block
+	 * context switches by long i915 perf read() operations.
+	 *
+	 * It's expected to always be safe to read the dev_priv->perf
+	 * state needed here, and expected to be benign to redundantly
+	 * update the state if the OA unit has been disabled since
+	 * oa_state_dirty was last set.
+	 */
+	if (atomic_cmpxchg(&ctx->engine[engine->id].oa_state_dirty, 1, 0))
+		gen8_update_reg_state_unlocked(engine, ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1486,7 +2191,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);

-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1775,6 +2480,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;

@@ -1792,12 +2498,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}

+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2069,9 +2791,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;

@@ -2087,11 +2806,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;

-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}

+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2107,13 +2853,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;

-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);

 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2172,36 +2929,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note: although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enable
+		 * execlist mode by default.
+		 */

-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}

-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}

-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);

-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);

-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }

 /**
@@ -2216,5 +3042,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);

 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4c72adae368b..c0bbba6d5b78 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define GEN8_OACTXID _MMIO(0x2364)

+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)

+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)

 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)

 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0

 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define OA_MEM_SELECT_GGTT  (1<<0)

+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)

 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)

+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7df278fe492e..e787656b630f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -337,6 +337,8 @@ static u64 execlists_update_context(struct drm_i915_gem_request *rq)
 	if (ppgtt && !i915_vm_is_48bit(&ppgtt->base))
 		execlists_update_context_pdps(ppgtt, reg_state);

+	i915_oa_update_reg_state(rq->engine, rq->ctx, reg_state);
+
 	return ce->lrc_desc;
 }

@@ -1879,6 +1881,9 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(dev_priv));
+
+		atomic_set(&ctx->engine[RCS].oa_state_dirty, 1);
+		i915_oa_update_reg_state(engine, ctx, regs);
 	}
 }

diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 464547d08173..15bc9f78ba4d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
 };

 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,

 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
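
For anyone wanting to exercise the new Gen8+ formats from userspace, a
minimal, untested sketch using the existing perf open ioctl follows. The
metrics set ID '1' is only a placeholder (real code is expected to read a
config's ID from sysfs, e.g. /sys/class/drm/card0/metrics/<uuid>/id),
drm_fd is assumed to be an already open DRM fd, and include paths may
differ depending on how libdrm headers are installed:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  static int open_oa_stream(int drm_fd)
  {
          /* Request raw OA reports in the Gen8+ A32u40_A4u32_B8_C8
           * format, sampled periodically with timer exponent 16.
           */
          uint64_t properties[] = {
                  DRM_I915_PERF_PROP_SAMPLE_OA, 1,
                  DRM_I915_PERF_PROP_OA_METRICS_SET, 1, /* placeholder */
                  DRM_I915_PERF_PROP_OA_FORMAT,
                          I915_OA_FORMAT_A32u40_A4u32_B8_C8,
                  DRM_I915_PERF_PROP_OA_EXPONENT, 16,
          };
          struct drm_i915_perf_open_param param = {
                  .flags = I915_PERF_FLAG_FD_CLOEXEC |
                           I915_PERF_FLAG_FD_NONBLOCK,
                  .num_properties = sizeof(properties) /
                                    (2 * sizeof(uint64_t)),
                  .properties_ptr = (uintptr_t)properties,
          };

          /* Returns a stream fd to read() OA reports from, or -1. */
          return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }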
--
2.11.0


* [PATCH v5 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (11 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 14/15] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
                     ` (5 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

These are auto-generated from an XML description of metric sets,
currently maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 4856 ++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 2323 +++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_chv.c    | 2525 ++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_hsw.c    |   58 +-
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 3156 ++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 2702 +++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 2745 ++++++++++++++++++-
 7 files changed, 18176 insertions(+), 189 deletions(-)
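
For reference, each generated config is just a table of { register,
value } pairs. A minimal sketch of how such a table might be programmed,
assuming a helper along the lines of the driver's existing
config_oa_regs() (write_oa_config() here is only an illustrative name):

  static void write_oa_config(struct drm_i915_private *dev_priv,
                              const struct i915_oa_reg *regs,
                              int n_regs)
  {
          int i;

          /* Write each (addr, value) pair of a generated table. The
           * per-gen select_metric_set() just points dev_priv->perf.oa
           * at the mux, boolean counter and flex EU tables for the
           * chosen set before they are written like this.
           */
          for (i = 0; i < n_regs; i++)
                  I915_WRITE(regs[i].addr, regs[i].value);
  }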

diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
index b0b1b75fb431..eff198ea639b 100644
--- a/drivers/gpu/drm/i915/i915_oa_bdw.c
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -31,9 +31,30 @@

 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_DATA_PORT_READS_COALESCING,
+	METRIC_SET_ID_DATA_PORT_WRITES_COALESCING,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };

-int i915_oa_n_builtin_metric_sets_bdw = 1;
+int i915_oa_n_builtin_metric_sets_bdw = 22;

 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -290,66 +311,4609 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return NULL;
 }

+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x065c2100 },
+	{ _MMIO(0x9888), 0x0a5c0041 },
+	{ _MMIO(0x9888), 0x0c5c6600 },
+	{ _MMIO(0x9888), 0x005c6580 },
+	{ _MMIO(0x9888), 0x085c8000 },
+	{ _MMIO(0x9888), 0x0e5c8000 },
+	{ _MMIO(0x9888), 0x00580042 },
+	{ _MMIO(0x9888), 0x08582080 },
+	{ _MMIO(0x9888), 0x0c58004c },
+	{ _MMIO(0x9888), 0x0e582580 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x185b1000 },
+	{ _MMIO(0x9888), 0x1a5b0104 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x08380042 },
+	{ _MMIO(0x9888), 0x0a382080 },
+	{ _MMIO(0x9888), 0x0e38404c },
+	{ _MMIO(0x9888), 0x0238404b },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x16380000 },
+	{ _MMIO(0x9888), 0x18381145 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801062 },
+	{ _MMIO(0x9888), 0x41801084 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_2_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x06dc2100 },
+	{ _MMIO(0x9888), 0x0adc0041 },
+	{ _MMIO(0x9888), 0x0cdc6600 },
+	{ _MMIO(0x9888), 0x00dc6580 },
+	{ _MMIO(0x9888), 0x08dc8000 },
+	{ _MMIO(0x9888), 0x0edc8000 },
+	{ _MMIO(0x9888), 0x00d80042 },
+	{ _MMIO(0x9888), 0x08d82080 },
+	{ _MMIO(0x9888), 0x0cd8004c },
+	{ _MMIO(0x9888), 0x0ed82580 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x18db1000 },
+	{ _MMIO(0x9888), 0x1adb0104 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x08b80042 },
+	{ _MMIO(0x9888), 0x0ab82080 },
+	{ _MMIO(0x9888), 0x0eb8404c },
+	{ _MMIO(0x9888), 0x02b8404b },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x16b80000 },
+	{ _MMIO(0x9888), 0x18b81145 },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b92000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x238b0540 },
+	{ _MMIO(0x9888), 0x258baaa0 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d805000 },
+	{ _MMIO(0x9888), 0x4f800555 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800062 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01);
+		return mux_config_compute_basic_0_slices_0x01;
+	} else if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_2_slices_0x02);
+		return mux_config_compute_basic_2_slices_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0a1e0000 },
+	{ _MMIO(0x9888), 0x0c1f000f },
+	{ _MMIO(0x9888), 0x10176800 },
+	{ _MMIO(0x9888), 0x1191001f },
+	{ _MMIO(0x9888), 0x0b880320 },
+	{ _MMIO(0x9888), 0x01890c40 },
+	{ _MMIO(0x9888), 0x118a1c00 },
+	{ _MMIO(0x9888), 0x118d7c00 },
+	{ _MMIO(0x9888), 0x118e0020 },
+	{ _MMIO(0x9888), 0x118f4c00 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x13900001 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x0c3d8000 },
+	{ _MMIO(0x9888), 0x06584000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x081e0040 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x021f5400 },
+	{ _MMIO(0x9888), 0x001f0000 },
+	{ _MMIO(0x9888), 0x101f0010 },
+	{ _MMIO(0x9888), 0x0e1f0080 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c13c000 },
+	{ _MMIO(0x9888), 0x06164000 },
+	{ _MMIO(0x9888), 0x06170012 },
+	{ _MMIO(0x9888), 0x00170000 },
+	{ _MMIO(0x9888), 0x01910005 },
+	{ _MMIO(0x9888), 0x07880002 },
+	{ _MMIO(0x9888), 0x01880c00 },
+	{ _MMIO(0x9888), 0x0f880000 },
+	{ _MMIO(0x9888), 0x0d880000 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x09890032 },
+	{ _MMIO(0x9888), 0x078a0800 },
+	{ _MMIO(0x9888), 0x0f8a0a00 },
+	{ _MMIO(0x9888), 0x198a4000 },
+	{ _MMIO(0x9888), 0x1b8a2000 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x038a4000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b54c0 },
+	{ _MMIO(0x9888), 0x258baa55 },
+	{ _MMIO(0x9888), 0x278b0019 },
+	{ _MMIO(0x9888), 0x198c0100 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0f8d0015 },
+	{ _MMIO(0x9888), 0x018d1000 },
+	{ _MMIO(0x9888), 0x098d8000 },
+	{ _MMIO(0x9888), 0x0b8df000 },
+	{ _MMIO(0x9888), 0x0d8d3000 },
+	{ _MMIO(0x9888), 0x038de000 },
+	{ _MMIO(0x9888), 0x058d3000 },
+	{ _MMIO(0x9888), 0x0d8e0004 },
+	{ _MMIO(0x9888), 0x058e000c },
+	{ _MMIO(0x9888), 0x098e0000 },
+	{ _MMIO(0x9888), 0x078e0000 },
+	{ _MMIO(0x9888), 0x038e0000 },
+	{ _MMIO(0x9888), 0x0b8f0020 },
+	{ _MMIO(0x9888), 0x198f0c00 },
+	{ _MMIO(0x9888), 0x078f8000 },
+	{ _MMIO(0x9888), 0x098f4000 },
+	{ _MMIO(0x9888), 0x0b900980 },
+	{ _MMIO(0x9888), 0x03900d80 },
+	{ _MMIO(0x9888), 0x01900000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801111 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f801011 },
+	{ _MMIO(0x9888), 0x43800443 },
+	{ _MMIO(0x9888), 0x51801111 },
+	{ _MMIO(0x9888), 0x45800422 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x47800c60 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800422 },
+	{ _MMIO(0x9888), 0x41800021 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845800 },
+	{ _MMIO(0x9888), 0x15840018 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385000a },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840018 },
+	{ _MMIO(0x9888), 0x07844c80 },
+	{ _MMIO(0x9888), 0x09840d9a },
+	{ _MMIO(0x9888), 0x0b840e9c },
+	{ _MMIO(0x9888), 0x0d840f9e },
+	{ _MMIO(0x9888), 0x0f840010 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801042 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845400 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840010 },
+	{ _MMIO(0x9888), 0x07844880 },
+	{ _MMIO(0x9888), 0x09840992 },
+	{ _MMIO(0x9888), 0x0b840a94 },
+	{ _MMIO(0x9888), 0x0d840b96 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2d800147 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801082 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x143d0160 },
+	{ _MMIO(0x9888), 0x163d2800 },
+	{ _MMIO(0x9888), 0x183d0120 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x003d0011 },
+	{ _MMIO(0x9888), 0x063d0900 },
+	{ _MMIO(0x9888), 0x083d0a13 },
+	{ _MMIO(0x9888), 0x0a3d0b15 },
+	{ _MMIO(0x9888), 0x0c3d2317 },
+	{ _MMIO(0x9888), 0x043d21b7 },
+	{ _MMIO(0x9888), 0x103d0000 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e5825c1 },
+	{ _MMIO(0x9888), 0x00586100 },
+	{ _MMIO(0x9888), 0x0258204c },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_2_subslices_0x02[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x145b0160 },
+	{ _MMIO(0x9888), 0x165b2800 },
+	{ _MMIO(0x9888), 0x185b0120 },
+	{ _MMIO(0x9888), 0x0e5c25c1 },
+	{ _MMIO(0x9888), 0x005c6100 },
+	{ _MMIO(0x9888), 0x025c204c },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x005b0011 },
+	{ _MMIO(0x9888), 0x065b0900 },
+	{ _MMIO(0x9888), 0x085b0a13 },
+	{ _MMIO(0x9888), 0x0a5b0b15 },
+	{ _MMIO(0x9888), 0x0c5b2317 },
+	{ _MMIO(0x9888), 0x045b21b7 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x0e5b0000 },
+	{ _MMIO(0x9888), 0x1a5b0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_4_subslices_0x04[] = {
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x143a0160 },
+	{ _MMIO(0x9888), 0x163a2800 },
+	{ _MMIO(0x9888), 0x183a0120 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e38a5c1 },
+	{ _MMIO(0x9888), 0x0038a100 },
+	{ _MMIO(0x9888), 0x0238204c },
+	{ _MMIO(0x9888), 0x16388000 },
+	{ _MMIO(0x9888), 0x183802aa },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06380000 },
+	{ _MMIO(0x9888), 0x08388000 },
+	{ _MMIO(0x9888), 0x0a388000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x003a0011 },
+	{ _MMIO(0x9888), 0x063a0900 },
+	{ _MMIO(0x9888), 0x083a0a13 },
+	{ _MMIO(0x9888), 0x0a3a0b15 },
+	{ _MMIO(0x9888), 0x0c3a2317 },
+	{ _MMIO(0x9888), 0x043a21b7 },
+	{ _MMIO(0x9888), 0x103a0000 },
+	{ _MMIO(0x9888), 0x0e3a0000 },
+	{ _MMIO(0x9888), 0x1a3a0000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_1_subslices_0x08[] = {
+	{ _MMIO(0x9888), 0x14bd0160 },
+	{ _MMIO(0x9888), 0x16bd2800 },
+	{ _MMIO(0x9888), 0x18bd0120 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x00dcc000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00bd0011 },
+	{ _MMIO(0x9888), 0x06bd0900 },
+	{ _MMIO(0x9888), 0x08bd0a13 },
+	{ _MMIO(0x9888), 0x0abd0b15 },
+	{ _MMIO(0x9888), 0x0cbd2317 },
+	{ _MMIO(0x9888), 0x04bd21b7 },
+	{ _MMIO(0x9888), 0x10bd0000 },
+	{ _MMIO(0x9888), 0x0ebd0000 },
+	{ _MMIO(0x9888), 0x1abd0000 },
+	{ _MMIO(0x9888), 0x0ed825c1 },
+	{ _MMIO(0x9888), 0x00d86100 },
+	{ _MMIO(0x9888), 0x02d8204c },
+	{ _MMIO(0x9888), 0x06d88000 },
+	{ _MMIO(0x9888), 0x08d8c000 },
+	{ _MMIO(0x9888), 0x0ad8c000 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x04d8c000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb4000 },
+	{ _MMIO(0x9888), 0x18db5400 },
+	{ _MMIO(0x9888), 0x1adb0155 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db4000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_3_subslices_0x10[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x14db0160 },
+	{ _MMIO(0x9888), 0x16db2800 },
+	{ _MMIO(0x9888), 0x18db0120 },
+	{ _MMIO(0x9888), 0x0edc25c1 },
+	{ _MMIO(0x9888), 0x00dc6100 },
+	{ _MMIO(0x9888), 0x02dc204c },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00db0011 },
+	{ _MMIO(0x9888), 0x06db0900 },
+	{ _MMIO(0x9888), 0x08db0a13 },
+	{ _MMIO(0x9888), 0x0adb0b15 },
+	{ _MMIO(0x9888), 0x0cdb2317 },
+	{ _MMIO(0x9888), 0x04db21b7 },
+	{ _MMIO(0x9888), 0x10db0000 },
+	{ _MMIO(0x9888), 0x0edb0000 },
+	{ _MMIO(0x9888), 0x1adb0000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_5_subslices_0x20[] = {
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x14ba0160 },
+	{ _MMIO(0x9888), 0x16ba2800 },
+	{ _MMIO(0x9888), 0x18ba0120 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb8a5c1 },
+	{ _MMIO(0x9888), 0x00b8a100 },
+	{ _MMIO(0x9888), 0x02b8204c },
+	{ _MMIO(0x9888), 0x16b88000 },
+	{ _MMIO(0x9888), 0x18b802aa },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x06b80000 },
+	{ _MMIO(0x9888), 0x08b88000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x00ba0011 },
+	{ _MMIO(0x9888), 0x06ba0900 },
+	{ _MMIO(0x9888), 0x08ba0a13 },
+	{ _MMIO(0x9888), 0x0aba0b15 },
+	{ _MMIO(0x9888), 0x0cba2317 },
+	{ _MMIO(0x9888), 0x04ba21b7 },
+	{ _MMIO(0x9888), 0x10ba0000 },
+	{ _MMIO(0x9888), 0x0eba0000 },
+	{ _MMIO(0x9888), 0x1aba0000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		return mux_config_compute_extended_0_subslices_0x01;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x02) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_2_subslices_0x02);
+		return mux_config_compute_extended_2_subslices_0x02;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x04) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_4_subslices_0x04);
+		return mux_config_compute_extended_4_subslices_0x04;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x08) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_1_subslices_0x08);
+		return mux_config_compute_extended_1_subslices_0x08;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x10) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_3_subslices_0x10);
+		return mux_config_compute_extended_3_subslices_0x10;
+	} else if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x20) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_5_subslices_0x20);
+		return mux_config_compute_extended_5_subslices_0x20;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x143f00b3 },
+	{ _MMIO(0x9888), 0x14bf00b3 },
+	{ _MMIO(0x9888), 0x138303c0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x003f0029 },
+	{ _MMIO(0x9888), 0x063f1400 },
+	{ _MMIO(0x9888), 0x083f1225 },
+	{ _MMIO(0x9888), 0x0e3f1327 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x005a4000 },
+	{ _MMIO(0x9888), 0x065a8000 },
+	{ _MMIO(0x9888), 0x085ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x001d4000 },
+	{ _MMIO(0x9888), 0x061d8000 },
+	{ _MMIO(0x9888), 0x081dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1f2a00 },
+	{ _MMIO(0x9888), 0x101f0280 },
+	{ _MMIO(0x9888), 0x00391000 },
+	{ _MMIO(0x9888), 0x06394000 },
+	{ _MMIO(0x9888), 0x08395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x0abf1429 },
+	{ _MMIO(0x9888), 0x0cbf1225 },
+	{ _MMIO(0x9888), 0x00bf1380 },
+	{ _MMIO(0x9888), 0x02bf0026 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0adac000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02da4000 },
+	{ _MMIO(0x9888), 0x0a9dc000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029d4000 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f002a },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0ab95000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b91000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f880003 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a8020 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x238b0520 },
+	{ _MMIO(0x9888), 0x258ba950 },
+	{ _MMIO(0x9888), 0x278b0016 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0001 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x03835180 },
+	{ _MMIO(0x9888), 0x05834022 },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800840 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800800 },
+	{ _MMIO(0x9888), 0x418014a2 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fff2 },
+	{ _MMIO(0x2774), 0x00007ff0 },
+	{ _MMIO(0x2778), 0x0007ffe2 },
+	{ _MMIO(0x277c), 0x00007ff0 },
+	{ _MMIO(0x2780), 0x0007ffc2 },
+	{ _MMIO(0x2784), 0x00007ff0 },
+	{ _MMIO(0x2788), 0x0007ff82 },
+	{ _MMIO(0x278c), 0x00007ff0 },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000bfef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000bfdf },
+	{ _MMIO(0x27a0), 0x0007fffa },
+	{ _MMIO(0x27a4), 0x0000bfbf },
+	{ _MMIO(0x27a8), 0x0007fffa },
+	{ _MMIO(0x27ac), 0x0000bf7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_reads_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x163d240b },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x185b5520 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d00b0 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d10a0 },
+	{ _MMIO(0x9888), 0x0c3d11a2 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x045b6300 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5555 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
+static const struct i915_oa_reg *
+get_data_port_reads_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					  int *len)
+{
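+	/*
+	 * This MUX config is specific to subslice 0 (the array is the
+	 * "_0_subslices_0x01" variant): only hand it back when subslice 0
+	 * is present in the fuse mask, otherwise report that no config is
+	 * available for this metric set.
+	 */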
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_data_port_reads_coalescing_0_subslices_0x01);
+		return mux_config_data_port_reads_coalescing_0_subslices_0x01;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ff72 },
+	{ _MMIO(0x2774), 0x0000bfd0 },
+	{ _MMIO(0x2778), 0x0007ff62 },
+	{ _MMIO(0x277c), 0x0000bfd0 },
+	{ _MMIO(0x2780), 0x0007ff42 },
+	{ _MMIO(0x2784), 0x0000bfd0 },
+	{ _MMIO(0x2788), 0x0007ff02 },
+	{ _MMIO(0x278c), 0x0000bfd0 },
+	{ _MMIO(0x2790), 0x0005fff2 },
+	{ _MMIO(0x2794), 0x0000bfd0 },
+	{ _MMIO(0x2798), 0x0005ffe2 },
+	{ _MMIO(0x279c), 0x0000bfd0 },
+	{ _MMIO(0x27a0), 0x0005ffc2 },
+	{ _MMIO(0x27a4), 0x0000bfd0 },
+	{ _MMIO(0x27a8), 0x0005ff82 },
+	{ _MMIO(0x27ac), 0x0000bfd0 },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_writes_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x143d0120 },
+	{ _MMIO(0x9888), 0x163d2400 },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d0094 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d1814 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0c3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x045b6a80 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0141 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f0282 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381415 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a82a0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b1555 },
+	{ _MMIO(0x9888), 0x278b0014 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x21852aaa },
+	{ _MMIO(0x9888), 0x23850028 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830141 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
+static const struct i915_oa_reg *
+get_data_port_writes_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					   int *len)
+{
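+	/*
+	 * As with the reads-coalescing set, this MUX config targets
+	 * subslice 0, so it is only usable when that subslice isn't
+	 * fused off.
+	 */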
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_data_port_writes_coalescing_0_subslices_0x01);
+		return mux_config_data_port_writes_coalescing_0_subslices_0x01;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static const struct i915_oa_reg *
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_4);
+	return mux_config_l3_4;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_1);
+	return mux_config_sampler_1;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_2);
+	return mux_config_sampler_2;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x161503e0 },
+	{ _MMIO(0x9888), 0x163503e0 },
+	{ _MMIO(0x9888), 0x165503e0 },
+	{ _MMIO(0x9888), 0x169503e0 },
+	{ _MMIO(0x9888), 0x16b503e0 },
+	{ _MMIO(0x9888), 0x16d503e0 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x04584000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b8000 },
+	{ _MMIO(0x9888), 0x0e1f00a8 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x06141000 },
+	{ _MMIO(0x9888), 0x041500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x0a338000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x0435c300 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x0c538000 },
+	{ _MMIO(0x9888), 0x06544000 },
+	{ _MMIO(0x9888), 0x065500c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dc4000 },
+	{ _MMIO(0x9888), 0x02bd8000 },
+	{ _MMIO(0x9888), 0x00d88000 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f0002 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x06ba8000 },
+	{ _MMIO(0x9888), 0x02938000 },
+	{ _MMIO(0x9888), 0x04942000 },
+	{ _MMIO(0x9888), 0x0095c300 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x04b44000 },
+	{ _MMIO(0x9888), 0x02b500c3 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d38000 },
+	{ _MMIO(0x9888), 0x04d48000 },
+	{ _MMIO(0x9888), 0x02d5c300 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b3500 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c40 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41801482 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x14100812 },
+	{ _MMIO(0x9888), 0x14125800 },
+	{ _MMIO(0x9888), 0x161200c0 },
+	{ _MMIO(0x9888), 0x14300812 },
+	{ _MMIO(0x9888), 0x14325800 },
+	{ _MMIO(0x9888), 0x163200c0 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183d2800 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b9400 },
+	{ _MMIO(0x9888), 0x1a5b002a },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f002a },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380155 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x00100047 },
+	{ _MMIO(0x9888), 0x06101a80 },
+	{ _MMIO(0x9888), 0x10100000 },
+	{ _MMIO(0x9888), 0x0810c000 },
+	{ _MMIO(0x9888), 0x0811c000 },
+	{ _MMIO(0x9888), 0x08126151 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x0e134000 },
+	{ _MMIO(0x9888), 0x161300a0 },
+	{ _MMIO(0x9888), 0x0a301ac7 },
+	{ _MMIO(0x9888), 0x10300000 },
+	{ _MMIO(0x9888), 0x0c30c000 },
+	{ _MMIO(0x9888), 0x0c31c000 },
+	{ _MMIO(0x9888), 0x0c326151 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x16332a00 },
+	{ _MMIO(0x9888), 0x18330001 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a2aa0 },
+	{ _MMIO(0x9888), 0x238b0020 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0001 },
+	{ _MMIO(0x9888), 0x1f850080 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830015 },
+	{ _MMIO(0x9888), 0x01844000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800002 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800884 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800002 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x198b0000 },
+	{ _MMIO(0x9888), 0x078b0066 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x21850008 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_READS_COALESCING:
+		dev_priv->perf.oa.mux_regs =
+			get_data_port_reads_coalescing_mux_config(dev_priv,
+								  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_READS_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_reads_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_reads_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_WRITES_COALESCING:
+		dev_priv->perf.oa.mux_regs =
+			get_data_port_writes_coalescing_mux_config(dev_priv,
+								   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_WRITES_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_writes_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_writes_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_4_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_1_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_2_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "35fbc9b2-a891-40a6-a38d-022bb7057552",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "233d0544-fff7-4281-8291-e02f222aff72",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "2b255d48-2117-4fef-a8f7-f151e1d25a2c",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "f7fd3220-b466-4a4d-9f98-b0caf3f2394c",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "e99ccaca-821c-4df9-97a7-96bdb7204e43",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27a364dc-8225-4ecb-b607-d6f1925598d9",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_data_port_reads_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_READS_COALESCING);
+}
+
+static struct device_attribute dev_attr_data_port_reads_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_reads_coalescing_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_data_port_reads_coalescing[] = {
+	&dev_attr_data_port_reads_coalescing_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_data_port_reads_coalescing = {
+	.name = "857fc630-2f09-4804-85f1-084adfadd5ab",
+	.attrs =  attrs_data_port_reads_coalescing,
+};
+
+static ssize_t
+show_data_port_writes_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_WRITES_COALESCING);
+}

-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_data_port_writes_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_writes_coalescing_id,
+	.store = NULL,
+};

-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_data_port_writes_coalescing[] = {
+	&dev_attr_data_port_writes_coalescing_id.attr,
+	NULL,
+};

-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_data_port_writes_coalescing = {
+	.name = "343ebc99-4a55-414c-8c17-d8e259cf5e20",
+	.attrs =  attrs_data_port_writes_coalescing,
+};

-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}

-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "7bdafd88-a4fa-4ed5-bc09-1a977aa5be3e",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
 }

+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "9385ebb2-f34f-4aa5-aec5-7e9cbbea0f0b",
+	.attrs =  attrs_l3_1,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
 }

-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_l3_2_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_l3_2_id,
 	.store = NULL,
 };

-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
 	NULL,
 };

-static struct attribute_group group_render_basic = {
-	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_l3_2 = {
+	.name = "446ae59b-ff2e-41c9-b49e-0184a54bf00a",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "84a7956f-1ea4-4d0d-837f-e39a0376e38c",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "92b493d9-df18-4bed-be06-5cac6f2a6f5f",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "14345c35-cc46-40d0-bb04-6ed1fbb43679",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "f0c6ba37-d3d3-4211-91b5-226730312a54",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "30bf3702-48cf-4bca-b412-7cf50bb2f564",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "238bec85-df05-44f3-b905-d166712f2451",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "24bf02cd-8693-4583-981c-c4165b33da01",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "8fb61ba2-2fbb-454c-a136-2dec5a8a595e",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "e1743ca0-7fc8-410b-a066-de7bbb9280b7",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "d6de6f55-e526-4f79-a6a6-d7315c09044e",
+	.attrs =  attrs_test_oa,
 };

 int
@@ -363,9 +4927,177 @@ i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+		if (ret)
+			goto error_data_port_reads_coalescing;
+	}
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+		if (ret)
+			goto error_data_port_writes_coalescing;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}

 	return 0;

+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+error_data_port_writes_coalescing:
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+error_data_port_reads_coalescing:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -377,4 +5109,46 @@ i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)

 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
index b10e44643bc1..7bc979a7c159 100644
--- a/drivers/gpu/drm/i915/i915_oa_bxt.c
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -31,9 +31,23 @@

 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_TEST_OA,
 };

-int i915_oa_n_builtin_metric_sets_bxt = 1;
+int i915_oa_n_builtin_metric_sets_bxt = 15;

 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -148,6 +162,1497 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return NULL;
 }

+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d4000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x0c2e1400 },
+	{ _MMIO(0x9888), 0x0e2e5100 },
+	{ _MMIO(0x9888), 0x102e0114 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004f6b42 },
+	{ _MMIO(0x9888), 0x064f6200 },
+	{ _MMIO(0x9888), 0x084f4100 },
+	{ _MMIO(0x9888), 0x0a4f0061 },
+	{ _MMIO(0x9888), 0x0c4f6c4c },
+	{ _MMIO(0x9888), 0x0e4f4b00 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0f8800 },
+	{ _MMIO(0x9888), 0x1c0f08a2 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c1451 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c0010 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x19938a28 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x19900177 },
+	{ _MMIO(0x9888), 0x1b900178 },
+	{ _MMIO(0x9888), 0x1d900125 },
+	{ _MMIO(0x9888), 0x1f900123 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c2e001f },
+	{ _MMIO(0x9888), 0x0a2f0000 },
+	{ _MMIO(0x9888), 0x10186800 },
+	{ _MMIO(0x9888), 0x11810019 },
+	{ _MMIO(0x9888), 0x15810013 },
+	{ _MMIO(0x9888), 0x13820020 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x17840000 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x21860000 },
+	{ _MMIO(0x9888), 0x178703e0 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x022e5400 },
+	{ _MMIO(0x9888), 0x002e0000 },
+	{ _MMIO(0x9888), 0x0e2e0080 },
+	{ _MMIO(0x9888), 0x082f0040 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x06174000 },
+	{ _MMIO(0x9888), 0x06180012 },
+	{ _MMIO(0x9888), 0x00180000 },
+	{ _MMIO(0x9888), 0x0d804000 },
+	{ _MMIO(0x9888), 0x0f804000 },
+	{ _MMIO(0x9888), 0x05804000 },
+	{ _MMIO(0x9888), 0x09810200 },
+	{ _MMIO(0x9888), 0x0b810030 },
+	{ _MMIO(0x9888), 0x03810003 },
+	{ _MMIO(0x9888), 0x21819140 },
+	{ _MMIO(0x9888), 0x23819050 },
+	{ _MMIO(0x9888), 0x25810018 },
+	{ _MMIO(0x9888), 0x0b820980 },
+	{ _MMIO(0x9888), 0x03820d80 },
+	{ _MMIO(0x9888), 0x11820000 },
+	{ _MMIO(0x9888), 0x0182c000 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x09824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0d830004 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x0f831000 },
+	{ _MMIO(0x9888), 0x01848072 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x09844000 },
+	{ _MMIO(0x9888), 0x0f848000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x09860092 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x01869100 },
+	{ _MMIO(0x9888), 0x0f870065 },
+	{ _MMIO(0x9888), 0x01870000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b952000 },
+	{ _MMIO(0x9888), 0x1d955055 },
+	{ _MMIO(0x9888), 0x1f951455 },
+	{ _MMIO(0x9888), 0x0992a000 },
+	{ _MMIO(0x9888), 0x0f928000 },
+	{ _MMIO(0x9888), 0x1192a800 },
+	{ _MMIO(0x9888), 0x1392028a },
+	{ _MMIO(0x9888), 0x0b92a000 },
+	{ _MMIO(0x9888), 0x0d922000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c01 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900863 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900c22 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x41900003 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900170 },
+	{ _MMIO(0x9888), 0x21900171 },
+	{ _MMIO(0x9888), 0x23900172 },
+	{ _MMIO(0x9888), 0x25900173 },
+	{ _MMIO(0x9888), 0x27900174 },
+	{ _MMIO(0x9888), 0x29900175 },
+	{ _MMIO(0x9888), 0x2b900176 },
+	{ _MMIO(0x9888), 0x2d900177 },
+	{ _MMIO(0x9888), 0x2f90017f },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900000 },
+	{ _MMIO(0x9888), 0x41900080 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900180 },
+	{ _MMIO(0x9888), 0x21900181 },
+	{ _MMIO(0x9888), 0x23900182 },
+	{ _MMIO(0x9888), 0x25900183 },
+	{ _MMIO(0x9888), 0x27900184 },
+	{ _MMIO(0x9888), 0x29900185 },
+	{ _MMIO(0x9888), 0x2b900186 },
+	{ _MMIO(0x9888), 0x2d900187 },
+	{ _MMIO(0x9888), 0x2f900170 },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x141c0160 },
+	{ _MMIO(0x9888), 0x161c0015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d5000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x0c2e5400 },
+	{ _MMIO(0x9888), 0x0e2e5515 },
+	{ _MMIO(0x9888), 0x102e0155 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x0e4cc000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0a4ea000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x0e4f4b41 },
+	{ _MMIO(0x9888), 0x004f4200 },
+	{ _MMIO(0x9888), 0x024f404c },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0a1bc000 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x001c0031 },
+	{ _MMIO(0x9888), 0x061c1900 },
+	{ _MMIO(0x9888), 0x081c1a33 },
+	{ _MMIO(0x9888), 0x0a1c1b35 },
+	{ _MMIO(0x9888), 0x0c1c3337 },
+	{ _MMIO(0x9888), 0x041c31c7 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0fa8aa },
+	{ _MMIO(0x9888), 0x1c0f0aaa },
+	{ _MMIO(0x9888), 0x182c8000 },
+	{ _MMIO(0x9888), 0x1c2c6aaa },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c2950 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993aaaa },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900400 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900001 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extended);
+	return mux_config_compute_extended;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c03b0 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x0c2e0400 },
+	{ _MMIO(0x9888), 0x0e2e1500 },
+	{ _MMIO(0x9888), 0x102e0140 },
+	{ _MMIO(0x9888), 0x044c4000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004e2000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x1a4f4001 },
+	{ _MMIO(0x9888), 0x1c4f5005 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x180f1000 },
+	{ _MMIO(0x9888), 0x1a0fa800 },
+	{ _MMIO(0x9888), 0x1c0f0a00 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c4015 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x03931980 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993a00a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900178 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x022d4000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x064c8000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x024f6100 },
+	{ _MMIO(0x9888), 0x044f416b },
+	{ _MMIO(0x9888), 0x064f004b },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1a0f02a8 },
+	{ _MMIO(0x9888), 0x1a2c5500 },
+	{ _MMIO(0x9888), 0x0f808000 },
+	{ _MMIO(0x9888), 0x25810020 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1f951000 },
+	{ _MMIO(0x9888), 0x13920200 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4d900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x12643400 },
+	{ _MMIO(0x9888), 0x12653400 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x0a640024 },
+	{ _MMIO(0x9888), 0x10640000 },
+	{ _MMIO(0x9888), 0x04640000 },
+	{ _MMIO(0x9888), 0x0c650024 },
+	{ _MMIO(0x9888), 0x10650000 },
+	{ _MMIO(0x9888), 0x06650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_lt_0x03[] = {
+	{ _MMIO(0x9888), 0x14640340 },
+	{ _MMIO(0x9888), 0x14650340 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x04642400 },
+	{ _MMIO(0x9888), 0x22640000 },
+	{ _MMIO(0x9888), 0x1a640000 },
+	{ _MMIO(0x9888), 0x06650024 },
+	{ _MMIO(0x9888), 0x22650000 },
+	{ _MMIO(0x9888), 0x1c650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
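+/*
+ * Two MUX variants are provided here, selected on PCI revision;
+ * presumably this mirrors a SKU/stepping availability condition in the
+ * source XML. The two branches cover every revision, so the trailing
+ * "*len = 0; return NULL;" below is unreachable, but it keeps the
+ * generated helper uniform with sets whose conditions may not cover
+ * every SKU.
+ */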
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		*len = ARRAY_SIZE(mux_config_l3_1_0_sku_gte_0x03);
+		return mux_config_l3_1_0_sku_gte_0x03;
+	} else if (dev_priv->drm.pdev->revision < 0x03) {
+		*len = ARRAY_SIZE(mux_config_l3_1_0_sku_lt_0x03);
+		return mux_config_l3_1_0_sku_lt_0x03;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102d7800 },
+	{ _MMIO(0x9888), 0x122d79e0 },
+	{ _MMIO(0x9888), 0x0c2f0004 },
+	{ _MMIO(0x9888), 0x100e3800 },
+	{ _MMIO(0x9888), 0x180f0005 },
+	{ _MMIO(0x9888), 0x002d0940 },
+	{ _MMIO(0x9888), 0x022d802f },
+	{ _MMIO(0x9888), 0x042d4013 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0050 },
+	{ _MMIO(0x9888), 0x022f0010 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x040e0480 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x060f0027 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x1a0f0040 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x439014a0 },
+	{ _MMIO(0x9888), 0x459000a4 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x121300a0 },
+	{ _MMIO(0x9888), 0x141600ab },
+	{ _MMIO(0x9888), 0x123300a0 },
+	{ _MMIO(0x9888), 0x143600ab },
+	{ _MMIO(0x9888), 0x125300a0 },
+	{ _MMIO(0x9888), 0x145600ab },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e01a0 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0065 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0800 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f023f },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2cc030 },
+	{ _MMIO(0x9888), 0x04132180 },
+	{ _MMIO(0x9888), 0x02130000 },
+	{ _MMIO(0x9888), 0x0c148000 },
+	{ _MMIO(0x9888), 0x0e142000 },
+	{ _MMIO(0x9888), 0x04148000 },
+	{ _MMIO(0x9888), 0x1e150140 },
+	{ _MMIO(0x9888), 0x1c150040 },
+	{ _MMIO(0x9888), 0x0c163000 },
+	{ _MMIO(0x9888), 0x0e160068 },
+	{ _MMIO(0x9888), 0x10160000 },
+	{ _MMIO(0x9888), 0x18160000 },
+	{ _MMIO(0x9888), 0x0a164000 },
+	{ _MMIO(0x9888), 0x04330043 },
+	{ _MMIO(0x9888), 0x02330000 },
+	{ _MMIO(0x9888), 0x0234a000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x1c350015 },
+	{ _MMIO(0x9888), 0x02363460 },
+	{ _MMIO(0x9888), 0x10360000 },
+	{ _MMIO(0x9888), 0x04360000 },
+	{ _MMIO(0x9888), 0x06360000 },
+	{ _MMIO(0x9888), 0x08364000 },
+	{ _MMIO(0x9888), 0x06530043 },
+	{ _MMIO(0x9888), 0x02530000 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x06542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550100 },
+	{ _MMIO(0x9888), 0x0e563000 },
+	{ _MMIO(0x9888), 0x00563400 },
+	{ _MMIO(0x9888), 0x10560000 },
+	{ _MMIO(0x9888), 0x18560000 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x0c564000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9014a0 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900820 },
+	{ _MMIO(0x9888), 0x45901022 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x141a0000 },
+	{ _MMIO(0x9888), 0x143a0000 },
+	{ _MMIO(0x9888), 0x145a0000 },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0150 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e006a },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0bc0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f0302 },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2c00f0 },
+	{ _MMIO(0x9888), 0x021a3080 },
+	{ _MMIO(0x9888), 0x041a31e5 },
+	{ _MMIO(0x9888), 0x02148000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150054 },
+	{ _MMIO(0x9888), 0x06168000 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x0c3a3280 },
+	{ _MMIO(0x9888), 0x0e3a0063 },
+	{ _MMIO(0x9888), 0x063a0061 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x0c348000 },
+	{ _MMIO(0x9888), 0x0e342000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1e350140 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x18360028 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x0e5a3080 },
+	{ _MMIO(0x9888), 0x005a3280 },
+	{ _MMIO(0x9888), 0x025a0063 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x02542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550001 },
+	{ _MMIO(0x9888), 0x18560080 },
+	{ _MMIO(0x9888), 0x02568000 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x45901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x141a026b },
+	{ _MMIO(0x9888), 0x143a0173 },
+	{ _MMIO(0x9888), 0x145a026b },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0069 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x180f6000 },
+	{ _MMIO(0x9888), 0x1a0f030a },
+	{ _MMIO(0x9888), 0x1a2c03c0 },
+	{ _MMIO(0x9888), 0x041a37e7 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150050 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x003a3380 },
+	{ _MMIO(0x9888), 0x063a006f },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x00348000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1a352000 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x02368000 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x025a37e7 },
+	{ _MMIO(0x9888), 0x0254a000 },
+	{ _MMIO(0x9888), 0x1c550005 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x06568000 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900020 },
+	{ _MMIO(0x9888), 0x45901080 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x141a001f },
+	{ _MMIO(0x9888), 0x143a001f },
+	{ _MMIO(0x9888), 0x145a001f },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0094 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x1a0f00e0 },
+	{ _MMIO(0x9888), 0x1a2c0c00 },
+	{ _MMIO(0x9888), 0x061a0063 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x06142000 },
+	{ _MMIO(0x9888), 0x1c150100 },
+	{ _MMIO(0x9888), 0x0c168000 },
+	{ _MMIO(0x9888), 0x043a3180 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x04348000 },
+	{ _MMIO(0x9888), 0x1c350040 },
+	{ _MMIO(0x9888), 0x0a368000 },
+	{ _MMIO(0x9888), 0x045a0063 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x1c550010 },
+	{ _MMIO(0x9888), 0x08568000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900004 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
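+/*
+ * TEST_OA is a deliberately small configuration (note the empty flex-EU
+ * table below) that appears intended for validating the OA unit and
+ * report layout, e.g. from i-g-t, rather than for profiling real
+ * workloads.
+ */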
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x19800000 },
+	{ _MMIO(0x9888), 0x07800063 },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x23810008 },
+	{ _MMIO(0x9888), 0x1d950400 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.mux_regs = NULL;
@@ -157,13 +1662,313 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;

-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
 		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
 		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");

 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -173,14 +1978,64 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 		}

 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_tdl_2;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_tdl_2);

 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_tdl_2;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);

 		return 0;
 	default:
@@ -210,6 +2065,314 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };

+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "012d72cf-82a9-4d25-8ddf-74076fd30797",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "ce416533-e49e-4211-80af-ec513590a914",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "398e2452-18d7-42d0-b241-e4d0a9148ada",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "d324a0d6-7269-4847-a5c2-6f71ddc7fed5",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "caf3596a-7bb1-4dec-b3b3-2a080d283b49",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "49b956e2-d5b9-47e0-9d8a-cee5e8cec527",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "f64ef50a-bdba-4b35-8f09-203c13d8ee5a",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "00ad5a41-7eab-4f7a-9103-49d411c67219",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "46dc44ca-491c-4cc1-a951-e7b3e62bf02b",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "8364e2a8-af63-40af-b0d5-42969a255654",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "175c8092-cb25-4d1e-8dc7-b4fdd39e2d92",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "d260f03f-b34d-4b49-a44e-436819117332",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "fa6ecf21-2cb8-4d0b-9308-6e4a7b4ca87a",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "5ee72f5c-092f-421e-8b70-225f7c3e9612",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 {
@@ -221,9 +2384,121 @@ i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}

 	return 0;

+error_test_oa:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -235,4 +2510,32 @@ i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)

 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
index 1c2889e819c8..878377c75e47 100644
--- a/drivers/gpu/drm/i915/i915_oa_chv.c
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -31,9 +31,22 @@

 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_TEST_OA,
 };

-int i915_oa_n_builtin_metric_sets_chv = 1;
+int i915_oa_n_builtin_metric_sets_chv = 14;

 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2740), 0x00000000 },
@@ -135,6 +148,1757 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return mux_config_render_basic;
 }

+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x2e5800e0 },
+	{ _MMIO(0x9888), 0x2e3800e0 },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x3d800140 },
+	{ _MMIO(0x9888), 0x08580042 },
+	{ _MMIO(0x9888), 0x0c580040 },
+	{ _MMIO(0x9888), 0x1058004c },
+	{ _MMIO(0x9888), 0x1458004b },
+	{ _MMIO(0x9888), 0x04580000 },
+	{ _MMIO(0x9888), 0x00580000 },
+	{ _MMIO(0x9888), 0x00195555 },
+	{ _MMIO(0x9888), 0x06380042 },
+	{ _MMIO(0x9888), 0x0a380040 },
+	{ _MMIO(0x9888), 0x0e38004c },
+	{ _MMIO(0x9888), 0x1238004b },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x00384444 },
+	{ _MMIO(0x9888), 0x003a5555 },
+	{ _MMIO(0x9888), 0x018bffff },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x17800074 },
+	{ _MMIO(0x9888), 0x1980007d },
+	{ _MMIO(0x9888), 0x1b80007c },
+	{ _MMIO(0x9888), 0x1d8000b6 },
+	{ _MMIO(0x9888), 0x1f8000b7 },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x03800000 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x4980012a },
+	{ _MMIO(0x9888), 0x4b80012a },
+	{ _MMIO(0x9888), 0x4d80012a },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x518001ce },
+	{ _MMIO(0x9888), 0x538001ce },
+	{ _MMIO(0x9888), 0x5580000e },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x261e0000 },
+	{ _MMIO(0x9888), 0x281f000f },
+	{ _MMIO(0x9888), 0x2817001a },
+	{ _MMIO(0x9888), 0x2791001f },
+	{ _MMIO(0x9888), 0x27880019 },
+	{ _MMIO(0x9888), 0x2d890000 },
+	{ _MMIO(0x9888), 0x278a0007 },
+	{ _MMIO(0x9888), 0x298d001f },
+	{ _MMIO(0x9888), 0x278e0020 },
+	{ _MMIO(0x9888), 0x2b8f0012 },
+	{ _MMIO(0x9888), 0x29900000 },
+	{ _MMIO(0x9888), 0x00184000 },
+	{ _MMIO(0x9888), 0x02181000 },
+	{ _MMIO(0x9888), 0x02194000 },
+	{ _MMIO(0x9888), 0x141e0002 },
+	{ _MMIO(0x9888), 0x041e0000 },
+	{ _MMIO(0x9888), 0x001e0000 },
+	{ _MMIO(0x9888), 0x221f0015 },
+	{ _MMIO(0x9888), 0x041f0000 },
+	{ _MMIO(0x9888), 0x001f4000 },
+	{ _MMIO(0x9888), 0x021f0000 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0213c000 },
+	{ _MMIO(0x9888), 0x02164000 },
+	{ _MMIO(0x9888), 0x24170012 },
+	{ _MMIO(0x9888), 0x04170000 },
+	{ _MMIO(0x9888), 0x07910005 },
+	{ _MMIO(0x9888), 0x05910000 },
+	{ _MMIO(0x9888), 0x01911500 },
+	{ _MMIO(0x9888), 0x03910501 },
+	{ _MMIO(0x9888), 0x0d880002 },
+	{ _MMIO(0x9888), 0x1d880003 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x0b890032 },
+	{ _MMIO(0x9888), 0x1b890031 },
+	{ _MMIO(0x9888), 0x05890000 },
+	{ _MMIO(0x9888), 0x01890040 },
+	{ _MMIO(0x9888), 0x03890040 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x198a0004 },
+	{ _MMIO(0x9888), 0x058a0000 },
+	{ _MMIO(0x9888), 0x018a8050 },
+	{ _MMIO(0x9888), 0x038a2050 },
+	{ _MMIO(0x9888), 0x018b95a9 },
+	{ _MMIO(0x9888), 0x038be5a9 },
+	{ _MMIO(0x9888), 0x018c1500 },
+	{ _MMIO(0x9888), 0x038c0501 },
+	{ _MMIO(0x9888), 0x178d0015 },
+	{ _MMIO(0x9888), 0x058d0000 },
+	{ _MMIO(0x9888), 0x138e0004 },
+	{ _MMIO(0x9888), 0x218e000c },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x018e0500 },
+	{ _MMIO(0x9888), 0x038e0101 },
+	{ _MMIO(0x9888), 0x0f8f0027 },
+	{ _MMIO(0x9888), 0x058f0000 },
+	{ _MMIO(0x9888), 0x018f0000 },
+	{ _MMIO(0x9888), 0x038f0001 },
+	{ _MMIO(0x9888), 0x11900013 },
+	{ _MMIO(0x9888), 0x1f900017 },
+	{ _MMIO(0x9888), 0x05900000 },
+	{ _MMIO(0x9888), 0x01900100 },
+	{ _MMIO(0x9888), 0x03900001 },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x03845555 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x458000aa },
+	{ _MMIO(0x9888), 0x478000aa },
+	{ _MMIO(0x9888), 0x4980018c },
+	{ _MMIO(0x9888), 0x4b80014b },
+	{ _MMIO(0x9888), 0x4d800128 },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x51800187 },
+	{ _MMIO(0x9888), 0x5380014b },
+	{ _MMIO(0x9888), 0x55800149 },
+	{ _MMIO(0x9888), 0x5780010a },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static const struct i915_oa_reg *
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_4);
+	return mux_config_l3_4;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_1);
+	return mux_config_sampler_1;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler_2);
+	return mux_config_sampler_2;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x338b0000 },
+	{ _MMIO(0x9888), 0x258b0066 },
+	{ _MMIO(0x9888), 0x058b0000 },
+	{ _MMIO(0x9888), 0x038b0000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x47800080 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x1823a4), 0x00000000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.mux_regs = NULL;
@@ -144,13 +1908,288 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;

-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_4_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_1_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_2_mux_config(dev_priv,
+						 &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
 		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
 		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");

 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -160,14 +2199,64 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 		}

 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_tdl_1;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_tdl_1);

 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_tdl_1;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);

 		return 0;
 	default:
@@ -197,6 +2286,292 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };

+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "f522a89c-ecd1-4522-8331-3383c54af5f5",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "a9ccc03d-a943-4e6b-9cd6-13e063075927",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "2cf0c064-68df-4fac-9b3f-57f51ca8a069",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "78a87ff9-543a-49ce-95ea-26d86071ea93",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "9f2cece5-7bfe-4320-ad66-8c7cc526bec5",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "d890ef38-d309-47e4-b8b5-aa779bb19ab0",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "5fdff4a6-9dc8-45e1-bfda-ef54869fbdd4",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "2c0e45e1-7e2c-4a14-ae00-0b7ec868b8aa",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "71148d78-baf5-474f-878a-e23158d0265d",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "b996a2b7-c59c-492d-877a-8cd54fd6df84",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "eb2fecba-b431-42e7-8261-fe9429a6e67a",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "60749470-a648-4a4b-9f10-dbfe1e36e44d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "4a534b07-cba3-414d-8d60-874830e883aa",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 {
@@ -208,9 +2583,113 @@ i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}

 	return 0;

+error_test_oa:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -222,4 +2701,30 @@ i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)

 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c
index 4ddf756add31..7247d539ff04 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.c
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.c
@@ -47,6 +47,9 @@ static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
 };

+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_render_basic[] = {
 	{ _MMIO(0x253a4), 0x01600000 },
 	{ _MMIO(0x25440), 0x00100000 },
@@ -137,6 +140,9 @@ static const struct i915_oa_reg b_counter_config_compute_basic[] = {
 	{ _MMIO(0x236c), 0x00000000 },
 };

+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_basic[] = {
 	{ _MMIO(0x253a4), 0x00000000 },
 	{ _MMIO(0x2681c), 0x01f00800 },
@@ -203,6 +209,9 @@ static const struct i915_oa_reg b_counter_config_compute_extended[] = {
 	{ _MMIO(0x27ac), 0x0000fffe },
 };

+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_extended[] = {
 	{ _MMIO(0x2681c), 0x3eb00800 },
 	{ _MMIO(0x26820), 0x00900000 },
@@ -260,6 +269,9 @@ static const struct i915_oa_reg b_counter_config_memory_reads[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };

+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_reads[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x2d800000 },
@@ -320,6 +332,9 @@ static const struct i915_oa_reg b_counter_config_memory_writes[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };

+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_writes[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x01500000 },
@@ -358,6 +373,9 @@ static const struct i915_oa_reg b_counter_config_sampler_balance[] = {
 	{ _MMIO(0x2724), 0x00800000 },
 };

+static const struct i915_oa_reg flex_eu_config_sampler_balance[] = {
+};
+
 static const struct i915_oa_reg mux_config_sampler_balance[] = {
 	{ _MMIO(0x2eb9c), 0x01906400 },
 	{ _MMIO(0x2fb9c), 0x01906400 },
@@ -436,6 +454,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_render_basic);

+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_BASIC:
 		dev_priv->perf.oa.mux_regs =
@@ -456,6 +479,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_basic);

+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_EXTENDED:
 		dev_priv->perf.oa.mux_regs =
@@ -476,6 +504,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_extended);

+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_READS:
 		dev_priv->perf.oa.mux_regs =
@@ -496,6 +529,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_reads);

+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_WRITES:
 		dev_priv->perf.oa.mux_regs =
@@ -516,6 +554,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_writes);

+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
 		return 0;
 	case METRIC_SET_ID_SAMPLER_BALANCE:
 		dev_priv->perf.oa.mux_regs =
@@ -536,6 +579,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_sampler_balance);

+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_balance;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_balance);
+
 		return 0;
 	default:
 		return -ENODEV;
@@ -714,19 +762,19 @@ i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 	return 0;

 error_sampler_balance:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
 error_memory_writes:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
 error_memory_reads:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
 error_compute_extended:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
 error_compute_basic:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
index ea0317a4781c..742ff11f9ec4 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt2.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -31,9 +31,26 @@

 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };

-int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt2 = 18;

 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -138,66 +155,2953 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return NULL;
 }

+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8200 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x004f0db2 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f1880 },
+	{ _MMIO(0x9888), 0x0a4f0011 },
+	{ _MMIO(0x9888), 0x0c4f0e3c },
+	{ _MMIO(0x9888), 0x0e4f1d80 },
+	{ _MMIO(0x9888), 0x086c0002 },
+	{ _MMIO(0x9888), 0x0a6c0100 },
+	{ _MMIO(0x9888), 0x0e6c000c },
+	{ _MMIO(0x9888), 0x026c000b },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x081b4000 },
+	{ _MMIO(0x9888), 0x0a1b8000 },
+	{ _MMIO(0x9888), 0x0e1b4000 },
+	{ _MMIO(0x9888), 0x021b4000 },
+	{ _MMIO(0x9888), 0x1a1c4000 },
+	{ _MMIO(0x9888), 0x1c1c0012 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x005bc000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5bc000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x105c8000 },
+	{ _MMIO(0x9888), 0x1a5ca000 },
+	{ _MMIO(0x9888), 0x1c5c002d },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x0a4c0800 },
+	{ _MMIO(0x9888), 0x0c4c0082 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002cc000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cbe00 },
+	{ _MMIO(0x9888), 0x182c00ef },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900840 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900840 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900840 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900840 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1810 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0000 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02);
+		return mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02;
+	} else if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		*len = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02);
+		return mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x15968000 },
+	{ _MMIO(0x9888), 0x17968000 },
+	{ _MMIO(0x9888), 0x0f96c000 },
+	{ _MMIO(0x9888), 0x1f950011 },
+	{ _MMIO(0x9888), 0x1d950014 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x0b978000 },
+	{ _MMIO(0x9888), 0x0f974000 },
+	{ _MMIO(0x9888), 0x11974000 },
+	{ _MMIO(0x9888), 0x13978000 },
+	{ _MMIO(0x9888), 0x09974000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x05e5a000 },
+	{ _MMIO(0x9888), 0x01e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
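+/*
+ * Note: the MUX programming for a metric set can differ between SKUs of
+ * the same gen, so each set gets a small helper that picks the applicable
+ * table at stream-open time. For this set the choice keys purely off the
+ * PCI revision ID; the trailing "*len = 0; return NULL;" fallback is
+ * unreachable here (the two branches cover every revision) but is
+ * presumably kept for consistency with sets whose conditions don't cover
+ * every device.
+ */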
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	if (dev_priv->drm.pdev->revision < 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_lt_0x02);
+		return mux_config_render_pipe_profile_0_sku_lt_0x02;
+	} else if (dev_priv->drm.pdev->revision >= 0x02) {
+		*len = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_gte_0x02);
+		return mux_config_render_pipe_profile_0_sku_gte_0x02;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900157 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x15940016 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d9300aa },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x0f940018 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
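+/*
+ * Note: unlike the render_pipe_profile selector above, this one also looks
+ * at the fused-in slice mask (INTEL_INFO()->sseu.slice_mask), so on a part
+ * where neither condition matches it can in principle return NULL, in which
+ * case the MEMORY_READS case in i915_oa_select_metric_set_sklgt2() below
+ * fails the open with -EINVAL.
+ */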
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02);
+		return mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02;
+	} else if ((dev_priv->drm.pdev->revision < 0x05) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02);
+		return mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02;
+	} else if (dev_priv->drm.pdev->revision >= 0x05) {
+		*len = ARRAY_SIZE(mux_config_memory_reads_0_sku_gte_0x05);
+		return mux_config_memory_reads_0_sku_gte_0x05;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d93002a },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02);
+		return mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02;
+	} else if ((dev_priv->drm.pdev->revision < 0x05) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		*len = ARRAY_SIZE(mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02);
+		return mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02;
+	} else if (dev_priv->drm.pdev->revision >= 0x05) {
+		*len = ARRAY_SIZE(mux_config_memory_writes_0_sku_gte_0x05);
+		return mux_config_memory_writes_0_sku_gte_0x05;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		*len = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		return mux_config_compute_extended_0_subslices_0x01;
+	}
+
+	*len = 0;
+	return NULL;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900167 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900042 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53901111 },
+	{ _MMIO(0x9888), 0x43900420 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0e0f006c },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1190e000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c00 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x143a5800 },
+	{ _MMIO(0x9888), 0x163a00c0 },
+	{ _MMIO(0x9888), 0x12380240 },
+	{ _MMIO(0x9888), 0x14380002 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c1500 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f9500 },
+	{ _MMIO(0x9888), 0x100f002a },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x0a2dc000 },
+	{ _MMIO(0x9888), 0x0c2dc000 },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x06393000 },
+	{ _MMIO(0x9888), 0x0c3a28c1 },
+	{ _MMIO(0x9888), 0x003a0000 },
+	{ _MMIO(0x9888), 0x0a33f000 },
+	{ _MMIO(0x9888), 0x0c33f000 },
+	{ _MMIO(0x9888), 0x0a37a000 },
+	{ _MMIO(0x9888), 0x0c37a000 },
+	{ _MMIO(0x9888), 0x0a380977 },
+	{ _MMIO(0x9888), 0x08380000 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06383000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900800 },
+	{ _MMIO(0x9888), 0x47901000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900844 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810016 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
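+/*
+ * Look up the register programming for the metric set selected on the
+ * stream (dev_priv->perf.oa.metrics_set) and point the perf.oa state at
+ * the matching MUX, B counter and flex EU tables. Presumably called from
+ * the i915 perf stream-open path once userspace has requested one of the
+ * sysfs-advertised set IDs; returns -EINVAL if no MUX config suits this
+ * particular device.
+ */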
 int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "fe47b29d-ae51-423e-bff4-27d965a95b60",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "e0ad5ae0-84ba-4f29-a723-1906c12cb774",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}

-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};

-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};

-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "9bc436dd-6130-4add-affc-283eb6eaa864",
+	.attrs =  attrs_memory_reads,
+};

-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}

-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "2ea0da8f-3527-4669-9d9d-13099a7435bf",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "d97d16af-028b-4cd1-a672-6210cb5513dd",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }

+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "9fb22842-e708-43f7-9752-e0e41670c39e",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }

-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };

-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };

-static struct attribute_group group_render_basic = {
-	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "5378e2a1-4248-4188-a4ae-da25a794c603",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "f42cdd6a-b000-42cb-870f-5eb423a7f514",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "b9bf2423-d88c-4a7b-a051-627611d00dcc",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "2414a93d-d84f-406e-99c0-472161194b40",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "53a45d2d-170b-4cf5-b7bb-585120c8e2f5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "b4cff514-a91e-4798-a0b3-426ca13fc9c1",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "7821d13b-9b8b-4405-9618-78cd56b62cce",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "893f1a4d-919d-4388-8cb7-746d73ea7259",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "41a24047-7484-4ead-ae37-de907e5ff2b2",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "95910492-943f-44bd-9461-390240f243fd",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "1651949f-0ac0-4cb1-a06f-dafd74a407d1",
+	.attrs =  attrs_test_oa,
 };

 int
@@ -211,9 +3115,145 @@ i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}

 	return 0;

+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -225,4 +3265,38 @@ i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)

 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
index a21211136191..bbc01a877eea 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt3.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -31,9 +31,26 @@

 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };

-int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt3 = 18;

 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -146,66 +163,2499 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return mux_config_render_basic;
 }

+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900863 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900c62 },
+	{ _MMIO(0x9888), 0x53903333 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901150 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x55905111 },
+	{ _MMIO(0x9888), 0x45901400 },
+	{ _MMIO(0x9888), 0x479004a5 },
+	{ _MMIO(0x9888), 0x57903455 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900005 },
+	{ _MMIO(0x9888), 0x53900455 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extended);
+	return mux_config_compute_extended;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900063 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53903333 },
+	{ _MMIO(0x9888), 0x43900840 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900005 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900402 },
+	{ _MMIO(0x9888), 0x53901550 },
+	{ _MMIO(0x9888), 0x45900080 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900050 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900115 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900884 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this;
+			 * the set wouldn't have been advertised to userspace,
+			 * so it shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "4320492b-fd03-42ac-922f-dbe1ef3b7b58",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "bd2d9cae-b9ec-4f5b-9d2f-934bed398a2d",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}

-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};

-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};

-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "4ca0f3fe-7fd3-4924-98cb-1807d9879767",
+	.attrs =  attrs_memory_reads,
+};

-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}

-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "a0c0172c-ee13-403d-99ff-2bdf6936cf14",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "52435e0b-f188-42ea-8680-21a56ee20dee",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }

+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27076eeb-49f3-4fed-8423-c66506005c63",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }

-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };

-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };

-static struct attribute_group group_render_basic = {
-	.name = "4616d450-2393-4836-8146-53c5ed84d359",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "8071b409-c39a-4674-94d7-32962ecfb512",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "5e0b391e-9ea8-4901-b2ff-b64ff616c7ed",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "25dc828e-1d2d-426e-9546-a1d4233cdf16",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "3dba9405-2d7e-4d70-8199-e734e82fd6bf",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "76935d7b-09c9-46bf-87f1-c18b4a86ebe5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "1b34c0d6-4f4c-4d7b-833f-4aaf236d87a6",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "b375c985-9953-455b-bda2-b03f7594e9db",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "3e2be2bb-884a-49bb-82c5-2358e6bd5f2d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "2d80a648-7b5a-4e92-bbe7-3b5c76f2e221",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "cfae9232-6ffc-42cc-a703-9790016925f0",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "2b985803-d3c9-4629-8a4f-634bfecba0e8",
+	.attrs =  attrs_test_oa,
 };

 int
@@ -219,9 +2669,145 @@ i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}

 	return 0;

+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -233,4 +2819,38 @@ i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)

 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
index 1af4ebe7d51f..9acd016c7057 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt4.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -31,9 +31,26 @@

 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };

-int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt4 = 18;

 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -157,66 +174,2542 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return mux_config_render_basic;
 }

+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53905555 },
+};
+
+static const struct i915_oa_reg *
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_basic);
+	return mux_config_compute_basic;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901110 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55901111 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57901411 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900411 },
+};
+
+static const struct i915_oa_reg *
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   int *len)
+{
+	*len = ARRAY_SIZE(mux_config_render_pipe_profile);
+	return mux_config_render_pipe_profile;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_reads);
+	return mux_config_memory_reads;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_memory_writes);
+	return mux_config_memory_writes;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extended);
+	return mux_config_compute_extended;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53905555 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static const struct i915_oa_reg *
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_l3_cache);
+	return mux_config_compute_l3_cache;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  int *len)
+{
+	*len = ARRAY_SIZE(mux_config_hdc_and_sf);
+	return mux_config_hdc_and_sf;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_1);
+	return mux_config_l3_1;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_2);
+	return mux_config_l3_2;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_l3_3);
+	return mux_config_l3_3;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    int *len)
+{
+	*len = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	return mux_config_rasterizer_and_pixel_backend;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static const struct i915_oa_reg *
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_sampler);
+	return mux_config_sampler;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_1);
+	return mux_config_tdl_1;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_tdl_2);
+	return mux_config_tdl_2;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x131203e0 },
+	{ _MMIO(0x9888), 0x133203e0 },
+	{ _MMIO(0x9888), 0x135203e0 },
+	{ _MMIO(0x9888), 0x1a4ef000 },
+	{ _MMIO(0x9888), 0x1c4e0003 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0c4c02a0 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0150 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x182c00a8 },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1acef000 },
+	{ _MMIO(0x9888), 0x1cce0003 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0ccc02a0 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x0c8d8000 },
+	{ _MMIO(0x9888), 0x0e8da000 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x108f0150 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x18ac00a8 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x072f8000 },
+	{ _MMIO(0x9888), 0x0d4c0100 },
+	{ _MMIO(0x9888), 0x0d0d8000 },
+	{ _MMIO(0x9888), 0x0f0da000 },
+	{ _MMIO(0x9888), 0x110f01b0 },
+	{ _MMIO(0x9888), 0x192c0080 },
+	{ _MMIO(0x9888), 0x0f2d4000 },
+	{ _MMIO(0x9888), 0x0f108000 },
+	{ _MMIO(0x9888), 0x0f118000 },
+	{ _MMIO(0x9888), 0x0f121980 },
+	{ _MMIO(0x9888), 0x01120000 },
+	{ _MMIO(0x9888), 0x0f134000 },
+	{ _MMIO(0x9888), 0x0f304000 },
+	{ _MMIO(0x9888), 0x0f314000 },
+	{ _MMIO(0x9888), 0x0f320033 },
+	{ _MMIO(0x9888), 0x01320000 },
+	{ _MMIO(0x9888), 0x0f331000 },
+	{ _MMIO(0x9888), 0x0d508000 },
+	{ _MMIO(0x9888), 0x0d518000 },
+	{ _MMIO(0x9888), 0x0d521980 },
+	{ _MMIO(0x9888), 0x01520000 },
+	{ _MMIO(0x9888), 0x0d534000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51901100 },
+	{ _MMIO(0x9888), 0x41901000 },
+	{ _MMIO(0x9888), 0x43901423 },
+	{ _MMIO(0x9888), 0x53903331 },
+	{ _MMIO(0x9888), 0x45900044 },
+};
+
+static const struct i915_oa_reg *
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     int *len)
+{
+	*len = ARRAY_SIZE(mux_config_compute_extra);
+	return mux_config_compute_extra;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900010 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900821 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			int *len)
+{
+	*len = ARRAY_SIZE(mux_config_vme_pipe);
+	return mux_config_vme_pipe;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static const struct i915_oa_reg *
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       int *len)
+{
+	*len = ARRAY_SIZE(mux_config_test_oa);
+	return mux_config_test_oa;
+}
+
 int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.mux_regs = NULL;
+	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_render_basic_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_basic_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.mux_regs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_reads_mux_config(dev_priv,
+						    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.mux_regs =
+			get_memory_writes_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extended_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.mux_regs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_1_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_2_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.mux_regs =
+			get_l3_3_mux_config(dev_priv,
+					    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.mux_regs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.mux_regs =
+			get_sampler_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_1_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.mux_regs =
+			get_tdl_2_mux_config(dev_priv,
+					     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.mux_regs =
+			get_compute_extra_mux_config(dev_priv,
+						     &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.mux_regs =
+			get_vme_pipe_mux_config(dev_priv,
+						&dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.mux_regs =
+			get_test_oa_mux_config(dev_priv,
+					       &dev_priv->perf.oa.mux_regs_len);
+		if (!dev_priv->perf.oa.mux_regs) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "7277228f-e7f3-4743-945a-6a2049d11377",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "463c668c-3f60-49b6-8f85-d995b635b3b2",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}

-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
-			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};

-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};

-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "3ae6e74c-72c3-4040-9bd0-7961430b8cc8",
+	.attrs =  attrs_memory_reads,
+};

-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}

-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "055f256d-4052-467c-8dec-6064a4806433",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "753972d4-87cd-4460-824d-754463ac5054",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }

+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "4e4392e9-8f73-457b-ab44-b49f7a0c733b",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }

-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };

-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };

-static struct attribute_group group_render_basic = {
-	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "730d95dd-7da8-4e1c-ab8d-c0eb1e4c1805",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "d9e86d70-462b-462a-851e-fd63e8c13d63",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "52200424-6ee9-48b3-b7fa-0afcf1975e4d",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "1988315f-0a26-44df-acb0-df7ec86b1456",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "f1f17ca7-286e-4ae5-9d15-9fccad6c665d",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "00a9e0fb-3d2e-4405-852c-dce6334ffb3b",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "13dcc50a-7ec0-409b-99d6-a3f932cedcb3",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "97875e21-6624-4aee-9191-682feb3eae21",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "a5aa857d-e8f0-4dfa-8981-ce340fa748fd",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "0e8d8b86-4ee7-4cdd-aaaa-58adc92cb29e",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "882fa433-1f4a-4a67-a962-c741888fe5f5",
+	.attrs =  attrs_test_oa,
 };

 int
@@ -230,9 +2723,145 @@ i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, &mux_len)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}

 	return 0;

+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -244,4 +2873,38 @@ i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)

 	if (get_render_basic_mux_config(dev_priv, &mux_len))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, &mux_len))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
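
For context (illustrative only, not part of the generated file): each of
the sysfs groups registered above is named by the metric set's UUID and
exposes a single "id" file. Userspace reads that ID and passes it back
when opening a perf stream. A minimal sketch, assuming the
/sys/class/drm/card0/metrics/<uuid>/id layout used by the existing
Haswell support (the helper name below is hypothetical):

#include <stdio.h>

/* Hypothetical helper: read the numeric ID of a metric set from the
 * sysfs group registered by i915_perf_register_sysfs_sklgt4().
 */
static int read_metric_set_id(const char *uuid)
{
	char path[256];
	FILE *f;
	int id = -1;

	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/metrics/%s/id", uuid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &id) != 1)
		id = -1;
	fclose(f);
	return id;
}

/* E.g. for the TestOa set added above:
 *   int id = read_metric_set_id("882fa433-1f4a-4a67-a962-c741888fe5f5");
 */
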
--
2.11.0

* [PATCH v5 14/15] drm/i915/perf: per-gen timebase for checking sample freq
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (12 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-04-24  7:17   ` [PATCH v5 15/15] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
                     ` (4 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx; +Cc: Lionel Landwerlin

From: Robert Bragg <robert@sixbynine.org>

An oa_exponent_to_ns() utility and per-gen timebase constants were
recently removed when updating the tail pointer race condition WA, and
this restores those so we can update the _PROP_OA_EXPONENT validation
done in read_properties_unlocked() to not assume we have a 12.5MHz
timebase as we did for Haswell.

Accordingly, the oa_sample_rate_hard_limit value referenced by
proc_dointvec_minmax, which defines the absolute limit for the OA
sampling frequency, is now initialized to (timestamp_frequency / 2)
instead of the 6.25MHz constant that was hard coded for Haswell.
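
As a rough illustration (not part of the patch), the relationship this
restores can be sketched as below. The helper mirrors the
oa_exponent_to_ns() added further down, and the frequencies are the
per-gen timebases used in this series (12.5MHz for HSW/BDW, 12MHz for
other Gen9, 19.2MHz for BXT):

#include <stdint.h>
#include <stdio.h>

/* period_ns = 1e9 * 2^(exponent + 1) / timestamp_frequency, matching
 * the oa_exponent_to_ns() helper added in this patch.
 */
static uint64_t oa_period_ns(uint64_t timestamp_frequency, int exponent)
{
	return (1000000000ULL * (2ULL << exponent)) / timestamp_frequency;
}

int main(void)
{
	/* Haswell's 12.5MHz timebase: exponent 0 -> 160ns, i.e. a
	 * 6.25MHz ceiling, which is where the old hard-coded 6250000
	 * limit came from. 12MHz (Gen9) gives ~167ns and 19.2MHz (BXT)
	 * ~104ns, hence initializing oa_sample_rate_hard_limit per-gen.
	 */
	printf("%llu %llu %llu\n",
	       (unsigned long long)oa_period_ns(12500000, 0),
	       (unsigned long long)oa_period_ns(12000000, 0),
	       (unsigned long long)oa_period_ns(19200000, 0));
	return 0;
}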

v2:
    Specify frequency of 19.2MHz for BXT (Ville)
    Initialize oa_sample_rate_hard_limit per-gen too (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  1 +
 drivers/gpu/drm/i915/i915_perf.c | 37 ++++++++++++++++++++++++++-----------
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 41a97ade39ea..30fe1fc04234 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2460,6 +2460,7 @@ struct drm_i915_private {

 			bool periodic;
 			int period_exponent;
+			int timestamp_frequency;

 			int metrics_set;

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 96fd59359109..bbd3f7fdd52a 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -288,10 +288,12 @@ static u32 i915_perf_stream_paranoid = true;

 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
- * 160ns is the smallest sampling period we can theoretically program the OA
- * unit with on Haswell, corresponding to 6.25MHz.
+ * The highest sampling frequency we can theoretically program the OA unit
+ * with is always half the timestamp frequency: E.g. 6.25Mhz for Haswell.
+ *
+ * Initialized just before we register the sysctl parameter.
  */
-static int oa_sample_rate_hard_limit = 6250000;
+static int oa_sample_rate_hard_limit;

 /* Theoretically we can program the OA unit to sample every 160ns but don't
  * allow that by default unless root...
@@ -2580,6 +2582,12 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	return ret;
 }

+static u64 oa_exponent_to_ns(struct drm_i915_private *dev_priv, int exponent)
+{
+	return div_u64(1000000000ULL * (2ULL << exponent),
+		       dev_priv->perf.oa.timestamp_frequency);
+}
+
 /**
  * read_properties_unlocked - validate + copy userspace stream open properties
  * @dev_priv: i915 device instance
@@ -2676,16 +2684,13 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			}

 			/* Theoretically we can program the OA unit to sample
-			 * every 160ns but don't allow that by default unless
-			 * root.
-			 *
-			 * On Haswell the period is derived from the exponent
-			 * as:
-			 *
-			 *   period = 80ns * 2^(exponent + 1)
+			 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns
+			 * for BXT. We don't allow such high sampling
+			 * frequencies by default unless root.
 			 */
+
 			BUILD_BUG_ON(sizeof(oa_period) != 8);
-			oa_period = 80ull * (2ull << value);
+			oa_period = oa_exponent_to_ns(dev_priv, value);

 			/* This check is primarily to ensure that oa_period <=
 			 * UINT32_MAX (before passing to do_div which only
@@ -2941,6 +2946,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.ops.oa_hw_tail_read =
 			gen7_oa_hw_tail_read;

+		dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 		dev_priv->perf.oa.oa_formats = hsw_oa_formats;

 		dev_priv->perf.oa.n_builtin_sets =
@@ -2954,6 +2961,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		 */

 		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
@@ -2970,6 +2979,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 					i915_oa_select_metric_set_chv;
 			}
 		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.timestamp_frequency = 12000000;
+
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
@@ -2990,6 +3001,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_sklgt4;
 			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.timestamp_frequency = 19200000;
+
 				dev_priv->perf.oa.n_builtin_sets =
 					i915_oa_n_builtin_metric_sets_bxt;
 				dev_priv->perf.oa.ops.select_metric_set =
@@ -3024,6 +3037,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

+		oa_sample_rate_hard_limit =
+			dev_priv->perf.oa.timestamp_frequency / 2;
 		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);

 		dev_priv->perf.initialized = true;
--
2.11.0

* [PATCH v5 15/15] drm/i915/perf: remove perf.hook_lock
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (13 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 14/15] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
@ 2017-04-24  7:17   ` Lionel Landwerlin
  2017-05-03 16:23   ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
                     ` (3 subsequent siblings)
  18 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24  7:17 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

In earlier iterations of the i915-perf driver we had a number of
callbacks/hooks from other parts of the i915 driver, e.g. to notify us
when a legacy context was pinned. These could run asynchronously with
respect to the stream file operations and might also run in atomic
context.

dev_priv->perf.hook_lock had been used for serialising access to state
needed within these callbacks, but as the code has evolved some of the
hooks have gone away or are implemented to avoid needing to lock any
state.

The remaining use of this lock was actually redundant considering how
the gen7 oacontrol state used to be updated as part of a context pin
hook.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  2 --
 drivers/gpu/drm/i915/i915_perf.c | 32 ++++++++++----------------------
 2 files changed, 10 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 30fe1fc04234..a6aacd70ce7e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2441,8 +2441,6 @@ struct drm_i915_private {
 		struct mutex lock;
 		struct list_head streams;

-		spinlock_t hook_lock;
-
 		struct {
 			struct i915_perf_stream *exclusive_stream;

diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index bbd3f7fdd52a..d36918a2bb70 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1710,9 +1710,17 @@ static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
 	/* NOP */
 }

-static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
+static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 {
-	lockdep_assert_held(&dev_priv->perf.hook_lock);
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory which this helps maintains.
+	 */
+	gen7_init_oa_buffer(dev_priv);

 	if (dev_priv->perf.oa.exclusive_stream->enabled) {
 		struct i915_gem_context *ctx =
@@ -1735,25 +1743,6 @@ static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 		I915_WRITE(GEN7_OACONTROL, 0);
 }

-static void gen7_oa_enable(struct drm_i915_private *dev_priv)
-{
-	unsigned long flags;
-
-	/* Reset buf pointers so we don't forward reports from before now.
-	 *
-	 * Think carefully if considering trying to avoid this, since it
-	 * also ensures status flags and the buffer itself are cleared
-	 * in error paths, and we have checks for invalid reports based
-	 * on the assumption that certain fields are written to zeroed
-	 * memory which this helps maintains.
-	 */
-	gen7_init_oa_buffer(dev_priv);
-
-	spin_lock_irqsave(&dev_priv->perf.hook_lock, flags);
-	gen7_update_oacontrol_locked(dev_priv);
-	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
-}
-
 static void gen8_oa_enable(struct drm_i915_private *dev_priv)
 {
 	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
@@ -3034,7 +3023,6 @@ void i915_perf_init(struct drm_i915_private *dev_priv)

 		INIT_LIST_HEAD(&dev_priv->perf.streams);
 		mutex_init(&dev_priv->perf.lock);
-		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

 		oa_sample_rate_hard_limit =
--
2.11.0

* Re: [PATCH v5 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-24  7:17   ` [PATCH v5 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
@ 2017-04-24 10:35     ` Chris Wilson
  2017-04-24 18:49       ` [PATCH v6 " Lionel Landwerlin
  0 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2017-04-24 10:35 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

On Mon, Apr 24, 2017 at 12:17:48AM -0700, Lionel Landwerlin wrote:
> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_gem_context *ctx;
> +	int ret;
> +
> +	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +	if (ret)
> +		return ret;

Missed a switch to kernel context, to ensure that the last context is
also saved to memory.

> +
> +	/* The OA register config is setup through the context image. This image
> +	 * might be written to by the GPU on context switch (in particular on
> +	 * lite-restore). This means we can't safely update a context's image,
> +	 * if this context is scheduled/submitted to run on the GPU.
> +	 *
> +	 * We could emit the OA register config through the batch buffer but
> +	 * this might leave small interval of time where the OA unit is
> +	 * configured at an invalid sampling period.
> +	 *
> +	 * So far the best way to work around this issue seems to be draining
> +	 * the GPU from any submitted work.
> +	 */
> +	ret = i915_gem_wait_for_idle(dev_priv,
> +				     I915_WAIT_INTERRUPTIBLE |
> +				     I915_WAIT_LOCKED);
> +	if (ret) {
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +		return ret;
> +	}
> +
> +	/* Since execlist submission may be happening asynchronously here then
> +	 * we first mark existing contexts dirty before we update the current
> +	 * context so if any switches happen in the middle we can expect
> +	 * that the act of scheduling will have itself ensured a consistent
> +	 * OA state update.
> +	 */
> +	list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +		/* The actual update of the register state context will happen
> +		 * the next time this logical ring is submitted. (See
> +		 * i915_oa_update_reg_state() which hooks into
> +		 * execlists_update_context())
> +		 */
> +		atomic_set(&ctx->engine[RCS].oa_state_dirty, 1);

Considering that everything is now available to be modified, you can
change the contexts right here and now and remove the checks from the
much more common path.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
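
A rough sketch of what that suggestion amounts to, inside a function like
the quoted gen8_configure_all_contexts() (helper names are assumed from
the i915 code of this period; this is not the actual v6 diff):

	/* Switch to the kernel context so the last user context is
	 * written back to memory, wait for idle, then update each
	 * context image in place instead of marking it dirty for a
	 * later submission.
	 */
	ret = i915_gem_switch_to_kernel_context(dev_priv);
	if (ret)
		goto out;

	ret = i915_gem_wait_for_idle(dev_priv,
				     I915_WAIT_INTERRUPTIBLE |
				     I915_WAIT_LOCKED);
	if (ret)
		goto out;

	list_for_each_entry(ctx, &dev_priv->context_list, link) {
		/* Map ctx->engine[RCS].state here and rewrite the
		 * OACTXCONTROL / flex EU registers directly, rather than
		 * relying on oa_state_dirty + execlists_update_context().
		 */
	}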

* [PATCH v6 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-24 10:35     ` Chris Wilson
@ 2017-04-24 18:49       ` Lionel Landwerlin
  2017-04-25 16:20         ` Lionel Landwerlin
  2017-04-25 16:42         ` Matthew Auld
  0 siblings, 2 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-24 18:49 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency, which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands), is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently-out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, it means
we can no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try and subtract the progress
of other contexts before forwarding reports via read() we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require the
dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
if not root.
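
For reference, a minimal sketch of how userspace would open a system-wide
Gen8+ OA stream under those rules (property, flag and format names are
assumed from this series' additions to the existing i915 perf uapi; the
metric set ID comes from the per-set sysfs "id" files, and root or
dev.i915.perf_stream_paranoid=0 is required as described above):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int open_oa_stream(int drm_fd, uint64_t metric_set_id,
			  uint64_t oa_exponent)
{
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metric_set_id,
		DRM_I915_PERF_PROP_OA_FORMAT,
			I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, oa_exponent,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	/* On success the ioctl returns a new stream fd which can then be
	 * read()/poll()ed for OA sample records.
	 */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}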

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

v6: In addition to drain, switch to kernel context & update all
    context in place (Chris)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
 drivers/gpu/drm/i915/i915_drv.h  |  45 +-
 drivers/gpu/drm/i915/i915_perf.c | 957 ++++++++++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h  |  22 +
 drivers/gpu/drm/i915/intel_lrc.c |   2 +
 include/uapi/drm/i915_drm.h      |  19 +-
 5 files changed, 952 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ffa1fc5eddfd..676b1227067c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2064,9 +2064,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);

 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2097,20 +2105,13 @@ struct i915_oa_ops {
 		    size_t *offset);

 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };

 struct intel_cdclk_state {
@@ -2472,6 +2473,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;

@@ -2541,6 +2543,14 @@ struct drm_i915_private {
 			} oa_buffer;

 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;

 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,

 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    uint32_t *reg_state);

 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 3277a52ce98e..f2688dee5a03 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@

 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"

 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;

 #define INVALID_CTX_ID 0xffffffff

+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+

 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };

+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)

 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };

+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;

@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;

-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);

 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;

@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note: that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context, it needs
+		 * to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not-uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiate appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering how we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, a buffer overflow very likely indicates that
+	 * something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * Copies all buffered OA reports into userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;

-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);

 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;

 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }

 /**
@@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	int ret;

-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		int ret;

-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ret = engine->context_pin(engine, stream->ctx);
-	if (ret)
-		goto unlock;
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;

-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we pin
+		 * the vma to ensure the ID remains fixed.
+		 */
+		ret = engine->context_pin(engine, stream->ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}

-unlock:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);

-	return ret;
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
+
+	return 0;
 }

 /**
@@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[RCS];

-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);

-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);

-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }

 static void
@@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }

+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1113,6 +1489,280 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }

+/*
+ * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
+ *
+ * MMIO reads or writes to any of the registers listed in the
+ * “Register State Context image” subsections through HOST/IA
+ * MMIO interface for debug purposes must follow the steps below:
+ *
+ * - SW should set the Force Wakeup bit to prevent GT from entering C6.
+ * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
+ * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
+ * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
+ * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
+ * - Read/Write to desired MMIO registers.
+ * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
+ * - Force Wakeup bit should be reset to enable C6 entry.
+ *
+ * XXX: don't nest or overlap calls to this API, it has no ref
+ * counting to track how many entities require the RCS to be
+ * blocked from being idle.
+ */
+static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
+		GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
+	int ret = 0;
+
+	/* Only in execlist mode can there be no active context while idle
+	 * (though we shouldn't be using this in any other case)
+	 */
+	if (WARN_ON(!i915.enable_execlists))
+		return -EINVAL;
+
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+
+	/* Note: we don't currently have a good handle on the maximum
+	 * latency for this wake up so while we only need to hold rcs
+	 * busy from process context we at least keep the waiting
+	 * interruptible...
+	 */
+	ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
+	if (ret == -ETIMEDOUT)
+		DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
+
+	if (ret)
+		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+
+	return ret;
+}
+
+static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	if (WARN_ON(!i915.enable_execlists))
+		return;
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+}
+
+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+	if (ret)
+		return ret;
+
+	/* Switch away from any user context.  */
+	ret = i915_gem_switch_to_kernel_context(dev_priv);
+	if (ret)
+		return ret;
+
+	/* The OA register config is setup through the context image. This image
+	 * might be written to by the GPU on context switch (in particular on
+	 * lite-restore). This means we can't safely update a context's image,
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv,
+				     I915_WAIT_INTERRUPTIBLE |
+				     I915_WAIT_LOCKED);
+	if (ret) {
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		return ret;
+	}
+
+	/* Update all contexts now that we've stalled the submission. */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		if (!ctx->engine[RCS].initialised)
+			continue;
+
+		gen8_update_reg_state_unlocked(ctx,
+					       ctx->engine[RCS].lrc_reg_state);
+	}
+
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	/* Now update the current context.
+	 *
+	 * Note: Using MMIO to update per-context registers requires
+	 * some extra care...
+	 */
+	ret = gen8_begin_ctx_mmio(dev_priv);
+	if (ret) {
+		DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
+		return ret;
+	}
+
+	I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
+					GEN8_OA_TIMER_PERIOD_SHIFT) |
+				      (dev_priv->perf.oa.periodic ?
+				       GEN8_OA_TIMER_ENABLE : 0) |
+				      GEN8_OA_COUNTER_RESUME));
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
+			dev_priv->perf.oa.flex_regs_len);
+
+	gen8_end_ctx_mmio(dev_priv);
+
+	return 0;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed causing a lot of redundant
+	 * work to processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering to use that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
+		       dev_priv->perf.oa.mux_regs_len);
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	/* It takes a fairly long time for a new MUX configuration to
+	 * be applied after these register writes. This delay
+	 * duration is taken from Haswell (derived empirically based on
+	 * the render_basic config) but hopefully it covers the
+	 * maximum configuration latency for Gen8+ too...
+	 */
+	usleep_range(15000, 20000);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	ret = gen8_configure_all_contexts(dev_priv);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* NOP */
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1157,6 +1807,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }

+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory, which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1183,6 +1856,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }

+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1361,6 +2039,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }

+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	gen8_update_reg_state_unlocked(ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1486,7 +2179,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);

-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1775,6 +2468,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;

@@ -1792,12 +2486,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}

+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2069,9 +2779,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;

@@ -2087,11 +2794,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;

-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}

+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2107,13 +2841,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;

-	i915_perf_unregister_sysfs_hsw(dev_priv);
+        if (IS_HASWELL(dev_priv))
+                i915_perf_unregister_sysfs_hsw(dev_priv);
+        else if (IS_BROADWELL(dev_priv))
+                i915_perf_unregister_sysfs_bdw(dev_priv);
+        else if (IS_CHERRYVIEW(dev_priv))
+                i915_perf_unregister_sysfs_chv(dev_priv);
+        else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+                i915_perf_unregister_sysfs_bxt(dev_priv);

 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2172,36 +2917,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note: that although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enable
+		 * execlist mode by default.
+		 */

-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}

-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}

-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);

-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);

-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }

 /**
@@ -2216,5 +3030,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);

 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4c72adae368b..c0bbba6d5b78 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define GEN8_OACTXID _MMIO(0x2364)

+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)

+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)

 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)

 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0

 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define OA_MEM_SELECT_GGTT  (1<<0)

+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)

 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)

+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7df278fe492e..351e231a02f4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1879,6 +1879,8 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(dev_priv));
+
+		i915_oa_init_reg_state(engine, ctx, regs);
 	}
 }

diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 464547d08173..15bc9f78ba4d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
 };

 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,

 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev7)
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (16 preceding siblings ...)
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
@ 2017-04-24 18:52 ` Patchwork
  2017-04-25 17:35 ` ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev8) Patchwork
  18 siblings, 0 replies; 87+ messages in thread
From: Patchwork @ 2017-04-24 18:52 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

== Series Details ==

Series: Enable OA unit for Gen 8 and 9 in i915 perf (rev7)
URL   : https://patchwork.freedesktop.org/series/20084/
State : failure

== Summary ==

cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/scheduler.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/scheduler.o] Error 1
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/i915_gpu_error.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_gpu_error.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/i915_vgpu.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_vgpu.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_fbdev.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_fbdev.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_hdmi.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_hdmi.o] Error 1
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/i915_oa_hsw.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_oa_hsw.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/i915_perf.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_perf.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/selftests/i915_selftest.o' failed
make[4]: *** [drivers/gpu/drm/i915/selftests/i915_selftest.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/dvo_tfp410.o' failed
make[4]: *** [drivers/gpu/drm/i915/dvo_tfp410.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_dp.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_dp.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/cmd_parser.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/cmd_parser.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_pm.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_pm.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/gtt.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/gtt.o] Error 1
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
  LD [M]  drivers/net/ethernet/intel/e1000/e1000.o
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/execlist.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/execlist.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_sdvo.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_sdvo.o] Error 1
  LD      drivers/scsi/scsi_mod.o
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/handlers.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/handlers.o] Error 1
  LD      drivers/tty/serial/8250/8250_base.o
  LD      drivers/tty/serial/8250/built-in.o
  LD [M]  drivers/net/ethernet/intel/igbvf/igbvf.o
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_display.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_display.o] Error 1
scripts/Makefile.build:553: recipe for target 'drivers/gpu/drm/i915' failed
make[3]: *** [drivers/gpu/drm/i915] Error 2
make[3]: *** Waiting for unfinished jobs....
  AR      lib/lib.a
  LD      drivers/tty/serial/built-in.o
  EXPORTS lib/lib-ksyms.o
  LD      net/xfrm/built-in.o
  LD      drivers/usb/core/usbcore.o
  LD      lib/built-in.o
  CC      arch/x86/kernel/cpu/capflags.o
  LD      arch/x86/kernel/cpu/built-in.o
  LD      drivers/usb/core/built-in.o
  LD      net/ipv4/built-in.o
  LD      fs/btrfs/btrfs.o
scripts/Makefile.build:553: recipe for target 'arch/x86/kernel' failed
make[1]: *** [arch/x86/kernel] Error 2
Makefile:1002: recipe for target 'arch/x86' failed
make: *** [arch/x86] Error 2
make: *** Waiting for unfinished jobs....
  LD      fs/btrfs/built-in.o
  LD      drivers/scsi/sd_mod.o
  LD      drivers/scsi/built-in.o
scripts/Makefile.build:553: recipe for target 'drivers/gpu/drm' failed
make[2]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:553: recipe for target 'drivers/gpu' failed
make[1]: *** [drivers/gpu] Error 2
make[1]: *** Waiting for unfinished jobs....
  LD      drivers/usb/host/xhci-hcd.o
  LD      drivers/tty/vt/built-in.o
  LD      drivers/tty/built-in.o
  LD      drivers/md/md-mod.o
  LD      drivers/md/built-in.o
  LD      net/core/built-in.o
  LD      fs/ext4/ext4.o
  LD      fs/ext4/built-in.o
  LD      net/built-in.o
  LD      fs/built-in.o
  LD      drivers/usb/host/built-in.o
  LD      drivers/usb/built-in.o
  LD [M]  drivers/net/ethernet/intel/igb/igb.o
  LD [M]  drivers/net/ethernet/intel/e1000e/e1000e.o
  LD      drivers/net/ethernet/built-in.o
  LD      drivers/net/built-in.o
Makefile:1002: recipe for target 'drivers' failed
make: *** [drivers] Error 2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v6 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-24 18:49       ` [PATCH v6 " Lionel Landwerlin
@ 2017-04-25 16:20         ` Lionel Landwerlin
  2017-04-25 16:42         ` Matthew Auld
  1 sibling, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-25 16:20 UTC (permalink / raw)
  To: intel-gfx, Auld, Matthew

Hey Matt,

This commit had your reviewed-by on v4, are you still okay with it?

Thanks!

-
Lionel

On 24/04/17 11:49, Lionel Landwerlin wrote:
> From: Robert Bragg <robert@sixbynine.org>
>
> Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
> share (more-or-less) the same OA unit design.
>
> Of particular note in comparison to Haswell: some OA unit HW config
> state has become per-context state and as a consequence it is somewhat
> more complicated to manage synchronous state changes from the cpu while
> there's no guarantee of what context (if any) is currently actively
> running on the gpu.
>
> The periodic sampling frequency which can be particularly useful for
> system-wide analysis (as opposed to command stream synchronised
> MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
> have become per-context save and restored (while the OABUFFER
> destination is still a shared, system-wide resource).
>
> This support for gen8+ takes care to consider a number of timing
> challenges involved in synchronously updating per-context state
> primarily by programming all config state from the cpu and updating all
> current and saved contexts synchronously while the OA unit is still
> disabled.
>
> The driver intentionally avoids depending on command streamer
> programming to update OA state considering the lack of synchronization
> between the automatic loading of OACTXCONTROL state (that includes the
> periodic sampling state and enable state) on context restore and the
> parsing of any general purpose BB the driver can control. I.e. this
> implementation is careful to avoid the possibility of a context restore
> temporarily enabling any out-of-date periodic sampling state. In
> addition to the risk of transiently-out-of-date state being loaded
> automatically, there are also internal HW latencies involved in the
> loading of MUX configurations which would be difficult to account for
> from the command streamer (and we only want to enable the unit once
> the MUX configuration is complete).
>
> Since the Gen8+ OA unit design no longer supports clock gating the unit
> off for a single given context (which effectively stopped any progress
> of counters while any other context was running) and instead supports
> tagging OA reports with a context ID for filtering on the CPU, we can
> no longer hide the system-wide progress of counters from a
> non-privileged application only interested in metrics for its own
> context. Although we could theoretically try and subtract the progress
> of other contexts before forwarding reports via read(), we aren't in a
> position to filter reports captured via MI_REPORT_PERF_COUNT commands.
> As a result, for Gen8+, we always require the
> dev.i915.perf_stream_paranoid sysctl to be unset for non-root access to
> OA metrics.
>
> v5: Drain submitted requests when enabling metric set to ensure no
>      lite-restore erases the context image we just updated (Lionel)
>
> v6: In addition to drain, switch to kernel context & update all
>      contexts in place (Chris)
>
> Signed-off-by: Robert Bragg <robert@sixbynine.org>
> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
> ---
>   drivers/gpu/drm/i915/i915_drv.h  |  45 +-
>   drivers/gpu/drm/i915/i915_perf.c | 957 ++++++++++++++++++++++++++++++++++++---
>   drivers/gpu/drm/i915/i915_reg.h  |  22 +
>   drivers/gpu/drm/i915/intel_lrc.c |   2 +
>   include/uapi/drm/i915_drm.h      |  19 +-
>   5 files changed, 952 insertions(+), 93 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index ffa1fc5eddfd..676b1227067c 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2064,9 +2064,17 @@ struct i915_oa_ops {
>   	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
>
>   	/**
> -	 * @enable_metric_set: Applies any MUX configuration to set up the
> -	 * Boolean and Custom (B/C) counters that are part of the counter
> -	 * reports being sampled. May apply system constraints such as
> +	 * @select_metric_set: The auto generated code that checks whether a
> +	 * requested OA config is applicable to the system and if so sets up
> +	 * the mux, oa and flex eu register config pointers according to the
> +	 * current dev_priv->perf.oa.metrics_set.
> +	 */
> +	int (*select_metric_set)(struct drm_i915_private *dev_priv);
> +
> +	/**
> +	 * @enable_metric_set: Selects and applies any MUX configuration to set
> +	 * up the Boolean and Custom (B/C) counters that are part of the
> +	 * counter reports being sampled. May apply system constraints such as
>   	 * disabling EU clock gating as required.
>   	 */
>   	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
> @@ -2097,20 +2105,13 @@ struct i915_oa_ops {
>   		    size_t *offset);
>
>   	/**
> -	 * @oa_buffer_check: Check for OA buffer data + update tail
> -	 *
> -	 * This is either called via fops or the poll check hrtimer (atomic
> -	 * ctx) without any locks taken.
> +	 * @oa_hw_tail_read: read the OA tail pointer register
>   	 *
> -	 * It's safe to read OA config state here unlocked, assuming that this
> -	 * is only called while the stream is enabled, while the global OA
> -	 * configuration can't be modified.
> -	 *
> -	 * Efficiency is more important than avoiding some false positives
> -	 * here, which will be handled gracefully - likely resulting in an
> -	 * %EAGAIN error for userspace.
> +	 * In particular this enables us to share all the fiddly code for
> +	 * handling the OA unit tail pointer race that affects multiple
> +	 * generations.
>   	 */
> -	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
> +	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
>   };
>
>   struct intel_cdclk_state {
> @@ -2472,6 +2473,7 @@ struct drm_i915_private {
>   			struct {
>   				struct i915_vma *vma;
>   				u8 *vaddr;
> +				u32 last_ctx_id;
>   				int format;
>   				int format_size;
>
> @@ -2541,6 +2543,14 @@ struct drm_i915_private {
>   			} oa_buffer;
>
>   			u32 gen7_latched_oastatus1;
> +			u32 ctx_oactxctrl_offset;
> +			u32 ctx_flexeu0_offset;
> +
> +			/* The RPT_ID/reason field for Gen8+ includes a bit
> +			 * to determine if the CTX ID in the report is valid
> +			 * but the specific bit differs between Gen 8 and 9
> +			 */
> +			u32 gen8_valid_ctx_bit;
>
>   			struct i915_oa_ops ops;
>   			const struct i915_oa_format *oa_formats;
> @@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
>   #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
>   				 INTEL_DEVID(dev_priv) == 0x5915 || \
>   				 INTEL_DEVID(dev_priv) == 0x591E)
> +#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
> +				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
>   #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
>   				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
>   #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
> @@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
>
>   int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>   			 struct drm_file *file);
> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
> +			    struct i915_gem_context *ctx,
> +			    uint32_t *reg_state);
>
>   /* i915_gem_evict.c */
>   int __must_check i915_gem_evict_something(struct i915_address_space *vm,
> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
> index 3277a52ce98e..f2688dee5a03 100644
> --- a/drivers/gpu/drm/i915/i915_perf.c
> +++ b/drivers/gpu/drm/i915/i915_perf.c
> @@ -196,6 +196,12 @@
>
>   #include "i915_drv.h"
>   #include "i915_oa_hsw.h"
> +#include "i915_oa_bdw.h"
> +#include "i915_oa_chv.h"
> +#include "i915_oa_sklgt2.h"
> +#include "i915_oa_sklgt3.h"
> +#include "i915_oa_sklgt4.h"
> +#include "i915_oa_bxt.h"
>
>   /* HW requires this to be a power of two, between 128k and 16M, though driver
>    * is currently generally designed assuming the largest 16M size is used such
> @@ -215,7 +221,7 @@
>    *
>    * Although this can be observed explicitly while copying reports to userspace
>    * by checking for a zeroed report-id field in tail reports, we want to account
> - * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
> + * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
>    * read() attempts.
>    *
>    * In effect we define a tail pointer for reading that lags the real tail
> @@ -237,7 +243,7 @@
>    * indicates that an updated tail pointer is needed.
>    *
>    * Most of the implementation details for this workaround are in
> - * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
> + * oa_buffer_check_unlocked() and _append_oa_reports()
>    *
>    * Note for posterity: previously the driver used to define an effective tail
>    * pointer that lagged the real pointer by a 'tail margin' measured in bytes
> @@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
>
>   #define INVALID_CTX_ID 0xffffffff
>
> +/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
> +#define OAREPORT_REASON_MASK           0x3f
> +#define OAREPORT_REASON_SHIFT          19
> +#define OAREPORT_REASON_TIMER          (1<<0)
> +#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
> +#define OAREPORT_REASON_CLK_RATIO      (1<<5)
> +
>
>   /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
>    *
> @@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
>   	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
>   };
>
> +static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
> +	[I915_OA_FORMAT_A12]		    = { 0, 64 },
> +	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
> +	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
> +	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
> +};
> +
>   #define SAMPLE_OA_REPORT      (1<<0)
>
>   /**
> @@ -332,8 +352,20 @@ struct perf_open_properties {
>   	int oa_period_exponent;
>   };
>
> +static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
> +{
> +	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
> +}
> +
> +static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
> +{
> +	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
> +
> +	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
> +}
> +
>   /**
> - * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
> + * oa_buffer_check_unlocked - check for data and update tail ptr state
>    * @dev_priv: i915 device instance
>    *
>    * This is either called via fops (for blocking reads in user ctx) or the poll
> @@ -356,12 +388,11 @@ struct perf_open_properties {
>    *
>    * Returns: %true if the OA buffer contains data, else %false
>    */
> -static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
> +static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>   {
>   	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>   	unsigned long flags;
>   	unsigned int aged_idx;
> -	u32 oastatus1;
>   	u32 head, hw_tail, aged_tail, aging_tail;
>   	u64 now;
>
> @@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>   	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
>   	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
>
> -	oastatus1 = I915_READ(GEN7_OASTATUS1);
> -	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
> +	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
>
>   	/* The tail pointer increases in 64 byte increments,
>   	 * not in report_size steps...
> @@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>   	if (aging_tail != INVALID_TAIL_PTR &&
>   	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
>   	     OA_TAIL_MARGIN_NSEC)) {
> +
>   		aged_idx ^= 1;
>   		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
>
> @@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
>    *
>    * Returns: 0 on success, negative error code on failure.
>    */
> +static int gen8_append_oa_reports(struct i915_perf_stream *stream,
> +				  char __user *buf,
> +				  size_t count,
> +				  size_t *offset)
> +{
> +	struct drm_i915_private *dev_priv = stream->dev_priv;
> +	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
> +	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
> +	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
> +	u32 mask = (OA_BUFFER_SIZE - 1);
> +	size_t start_offset = *offset;
> +	unsigned long flags;
> +	unsigned int aged_tail_idx;
> +	u32 head, tail;
> +	u32 taken;
> +	int ret = 0;
> +
> +	if (WARN_ON(!stream->enabled))
> +		return -EIO;
> +
> +	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	head = dev_priv->perf.oa.oa_buffer.head;
> +	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
> +	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
> +
> +	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	/* An invalid tail pointer here means we're still waiting for the poll
> +	 * hrtimer callback to give us a pointer
> +	 */
> +	if (tail == INVALID_TAIL_PTR)
> +		return -EAGAIN;
> +
> +	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
> +	 * while indexing relative to oa_buf_base.
> +	 */
> +	head -= gtt_offset;
> +	tail -= gtt_offset;
> +
> +	/* An out of bounds or misaligned head or tail pointer implies a driver
> +	 * bug since we validate + align the tail pointers we read from the
> +	 * hardware and we are in full control of the head pointer which should
> +	 * only be incremented by multiples of the report size (notably also
> +	 * all a power of two).
> +	 */
> +	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
> +		      tail > OA_BUFFER_SIZE || tail % report_size,
> +		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
> +		      head, tail))
> +		return -EIO;
> +
> +
> +	for (/* none */;
> +	     (taken = OA_TAKEN(tail, head));
> +	     head = (head + report_size) & mask) {
> +		u8 *report = oa_buf_base + head;
> +		u32 *report32 = (void *)report;
> +		u32 ctx_id;
> +		u32 reason;
> +
> +		/* All the report sizes factor neatly into the buffer
> +		 * size so we never expect to see a report split
> +		 * between the beginning and end of the buffer.
> +		 *
> +		 * Given the initial alignment check a misalignment
> +		 * here would imply a driver bug that would result
> +		 * in an overrun.
> +		 */
> +		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
> +			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
> +			break;
> +		}
> +
> +		/* The reason field includes flags identifying what
> +		 * triggered this specific report (mostly timer
> +		 * triggered or e.g. due to a context switch).
> +		 *
> +		 * This field is never expected to be zero so we can
> +		 * check that the report isn't invalid before copying
> +		 * it to userspace...
> +		 */
> +		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
> +			  OAREPORT_REASON_MASK);
> +		if (reason == 0) {
> +			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
> +				DRM_NOTE("Skipping spurious, invalid OA report\n");
> +			continue;
> +		}
> +
> +		/* XXX: Just keep the lower 21 bits for now since I'm not
> +		 * entirely sure if the HW touches any of the higher bits in
> +		 * this field
> +		 */
> +		ctx_id = report32[2] & 0x1fffff;
> +
> +		/* Squash whatever is in the CTX_ID field if it's marked as
> +		 * invalid to be sure we avoid false-positive, single-context
> +		 * filtering below...
> +		 *
> +		 * Note that we don't clear the valid_ctx_bit so userspace can
> +		 * understand that the ID has been squashed by the kernel.
> +		 */
> +		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
> +			ctx_id = report32[2] = INVALID_CTX_ID;
> +
> +		/* NB: For Gen 8 the OA unit no longer supports clock gating
> +		 * off for a specific context and the kernel can't securely
> +		 * stop the counters from updating as system-wide / global
> +		 * values.
> +		 *
> +		 * Automatic reports now include a context ID so reports can be
> +		 * filtered on the cpu but it's not worth trying to
> +		 * automatically subtract/hide counter progress for other
> +		 * contexts while filtering since we can't stop userspace
> +		 * issuing MI_REPORT_PERF_COUNT commands which would still
> +		 * provide a side-band view of the real values.
> +		 *
> +		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
> +		 * to normalize counters for a single filtered context, it
> +		 * needs to be forwarded bookend context-switch reports so that it
> +		 * can track switches in between MI_REPORT_PERF_COUNT commands
> +		 * and can itself subtract/ignore the progress of counters
> +		 * associated with other contexts. Note that the hardware
> +		 * automatically triggers reports when switching to a new
> +		 * context which are tagged with the ID of the newly active
> +		 * context. To avoid the complexity (and likely fragility) of
> +		 * reading ahead while parsing reports to try and minimize
> +		 * forwarding redundant context switch reports (i.e. between
> +		 * other, unrelated contexts) we simply elect to forward them
> +		 * all.
> +		 *
> +		 * We don't rely solely on the reason field to identify context
> +		 * switches since it's not uncommon for periodic samples to
> +		 * identify a switch before any 'context switch' report.
> +		 */
> +		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
> +		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
> +		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
> +		     dev_priv->perf.oa.specific_ctx_id) ||
> +		    reason & OAREPORT_REASON_CTX_SWITCH) {
> +
> +			/* While filtering for a single context we avoid
> +			 * leaking the IDs of other contexts.
> +			 */
> +			if (dev_priv->perf.oa.exclusive_stream->ctx &&
> +			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
> +				report32[2] = INVALID_CTX_ID;
> +			}
> +
> +			ret = append_oa_sample(stream, buf, count, offset,
> +					       report);
> +			if (ret)
> +				break;
> +
> +			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
> +		}
> +
> +		/* The above reason field sanity check is based on
> +		 * the assumption that the OA buffer is initially
> +		 * zeroed and we reset the field after copying so the
> +		 * check is still meaningful once old reports start
> +		 * being overwritten.
> +		 */
> +		report32[0] = 0;
> +	}
> +
> +	if (start_offset != *offset) {
> +		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +		/* We removed the gtt_offset for the copy loop above, indexing
> +		 * relative to oa_buf_base so put back here...
> +		 */
> +		head += gtt_offset;
> +
> +		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
> +		dev_priv->perf.oa.oa_buffer.head = head;
> +
> +		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * gen8_oa_read - copy status records then buffered OA reports
> + * @stream: An i915-perf stream opened for OA metrics
> + * @buf: destination buffer given by userspace
> + * @count: the number of bytes userspace wants to read
> + * @offset: (inout): the current position for writing into @buf
> + *
> + * Checks OA unit status registers and if necessary appends corresponding
> + * status records for userspace (such as for a buffer full condition) and then
> + * initiates appending any buffered OA reports.
> + *
> + * Updates @offset according to the number of bytes successfully copied into
> + * the userspace buffer.
> + *
> + * NB: some data may be successfully copied to the userspace buffer
> + * even if an error is returned, and this is reflected in the
> + * updated @offset.
> + *
> + * Returns: zero on success or a negative error code
> + */
> +static int gen8_oa_read(struct i915_perf_stream *stream,
> +			char __user *buf,
> +			size_t count,
> +			size_t *offset)
> +{
> +	struct drm_i915_private *dev_priv = stream->dev_priv;
> +	u32 oastatus;
> +	int ret;
> +
> +	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
> +		return -EIO;
> +
> +	oastatus = I915_READ(GEN8_OASTATUS);
> +
> +	/* We treat OABUFFER_OVERFLOW as a significant error:
> +	 *
> +	 * Although theoretically we could handle this more gracefully
> +	 * sometimes, some Gens don't correctly suppress certain
> +	 * automatically triggered reports in this condition and so we
> +	 * have to assume that old reports are now being trampled
> +	 * over.
> +	 *
> +	 * Considering that we don't currently give userspace control
> +	 * over the OA buffer size and always configure a large 16MB
> +	 * buffer, a buffer overflow in any case likely indicates
> +	 * that something has gone quite badly wrong.
> +	 */
> +	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
> +		ret = append_oa_status(stream, buf, count, offset,
> +				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
> +		if (ret)
> +			return ret;
> +
> +		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
> +			  dev_priv->perf.oa.period_exponent);
> +
> +		dev_priv->perf.oa.ops.oa_disable(dev_priv);
> +		dev_priv->perf.oa.ops.oa_enable(dev_priv);
> +
> +		/* Note: .oa_enable() is expected to re-init the oabuffer
> +		 * and reset GEN8_OASTATUS for us
> +		 */
> +		oastatus = I915_READ(GEN8_OASTATUS);
> +	}
> +
> +	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
> +		ret = append_oa_status(stream, buf, count, offset,
> +				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
> +		if (ret)
> +			return ret;
> +		I915_WRITE(GEN8_OASTATUS,
> +			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
> +	}
> +
> +	return gen8_append_oa_reports(stream, buf, count, offset);
> +}
> +
> +/**
> + * Copies all buffered OA reports into userspace read() buffer.
> + * @stream: An i915-perf stream opened for OA metrics
> + * @buf: destination buffer given by userspace
> + * @count: the number of bytes userspace wants to read
> + * @offset: (inout): the current position for writing into @buf
> + *
> + * Notably any error condition resulting in a short read (-%ENOSPC or
> + * -%EFAULT) will be returned even though one or more records may
> + * have been successfully copied. In this case it's up to the caller
> + * to decide if the error should be squashed before returning to
> + * userspace.
> + *
> + * Note: reports are consumed from the head, and appended to the
> + * tail, so the tail chases the head?... If you think that's mad
> + * and back-to-front you're not alone, but this follows the
> + * Gen PRM naming convention.
> + *
> + * Returns: 0 on success, negative error code on failure.
> + */
>   static int gen7_append_oa_reports(struct i915_perf_stream *stream,
>   				  char __user *buf,
>   				  size_t count,
> @@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
>   		if (ret)
>   			return ret;
>
> -		DRM_DEBUG("OA buffer overflow: force restart\n");
> +		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
> +			  dev_priv->perf.oa.period_exponent);
>
>   		dev_priv->perf.oa.ops.oa_disable(dev_priv);
>   		dev_priv->perf.oa.ops.oa_enable(dev_priv);
> @@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
>   		return -EIO;
>
>   	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
> -					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
> +					oa_buffer_check_unlocked(dev_priv));
>   }
>
>   /**
> @@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
>   static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
>   {
>   	struct drm_i915_private *dev_priv = stream->dev_priv;
> -	struct intel_engine_cs *engine = dev_priv->engine[RCS];
> -	int ret;
>
> -	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> -	if (ret)
> -		return ret;
> +	if (i915.enable_execlists)
> +		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
> +	else {
> +		struct intel_engine_cs *engine = dev_priv->engine[RCS];
> +		int ret;
>
> -	/* As the ID is the gtt offset of the context's vma we pin
> -	 * the vma to ensure the ID remains fixed.
> -	 *
> -	 * NB: implied RCS engine...
> -	 */
> -	ret = engine->context_pin(engine, stream->ctx);
> -	if (ret)
> -		goto unlock;
> +		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +		if (ret)
> +			return ret;
>
> -	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
> -	 * on the fly) considering the difference with gen8+ and
> -	 * execlists
> -	 */
> -	dev_priv->perf.oa.specific_ctx_id =
> -		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
> +		/* As the ID is the gtt offset of the context's vma we pin
> +		 * the vma to ensure the ID remains fixed.
> +		 */
> +		ret = engine->context_pin(engine, stream->ctx);
> +		if (ret) {
> +			mutex_unlock(&dev_priv->drm.struct_mutex);
> +			return ret;
> +		}
>
> -unlock:
> -	mutex_unlock(&dev_priv->drm.struct_mutex);
> +		/* Explicitly track the ID (instead of calling
> +		 * i915_ggtt_offset() on the fly) considering the difference
> +		 * with gen8+ and execlists
> +		 */
> +		dev_priv->perf.oa.specific_ctx_id =
> +			i915_ggtt_offset(stream->ctx->engine[engine->id].state);
>
> -	return ret;
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +	}
> +
> +	return 0;
>   }
>
>   /**
> @@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
>   	struct drm_i915_private *dev_priv = stream->dev_priv;
>   	struct intel_engine_cs *engine = dev_priv->engine[RCS];
>
> -	mutex_lock(&dev_priv->drm.struct_mutex);
> +	if (i915.enable_execlists) {
> +		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> +	} else {
> +		mutex_lock(&dev_priv->drm.struct_mutex);
>
> -	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> -	engine->context_unpin(engine, stream->ctx);
> +		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> +		engine->context_unpin(engine, stream->ctx);
>
> -	mutex_unlock(&dev_priv->drm.struct_mutex);
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +	}
>   }
>
>   static void
> @@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
>   	dev_priv->perf.oa.pollin = false;
>   }
>
> +static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
> +{
> +	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	I915_WRITE(GEN8_OASTATUS, 0);
> +	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
> +	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
> +
> +	I915_WRITE(GEN8_OABUFFER_UDW, 0);
> +
> +	/* PRM says:
> +	 *
> +	 *  "This MMIO must be set before the OATAILPTR
> +	 *  register and after the OAHEADPTR register. This is
> +	 *  to enable proper functionality of the overflow
> +	 *  bit."
> +	 */
> +	I915_WRITE(GEN8_OABUFFER, gtt_offset |
> +		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
> +	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
> +
> +	/* Mark that we need updated tail pointers to read from... */
> +	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
> +	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
> +
> +	/* Reset state used to recognise context switches, affecting which
> +	 * reports we will forward to userspace while filtering for a single
> +	 * context.
> +	 */
> +	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
> +
> +	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	/* NB: although the OA buffer will initially be allocated
> +	 * zeroed via shmfs (and so this memset is redundant when
> +	 * first allocating), we may re-init the OA buffer, either
> +	 * when re-enabling a stream or in error/reset paths.
> +	 *
> +	 * The reason we clear the buffer for each re-init is for the
> +	 * sanity check in gen8_append_oa_reports() that looks at the
> +	 * reason field to make sure it's non-zero which relies on
> +	 * the assumption that new reports are being written to zeroed
> +	 * memory...
> +	 */
> +	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
> +
> +	/* Maybe make ->pollin per-stream state if we support multiple
> +	 * concurrent streams in the future.
> +	 */
> +	dev_priv->perf.oa.pollin = false;
> +}
> +
>   static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
>   {
>   	struct drm_i915_gem_object *bo;
> @@ -1113,6 +1489,280 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
>   				      ~GT_NOA_ENABLE));
>   }
>
> +/*
> + * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
> + *
> + * MMIO reads or writes to any of the registers listed in the
> + * “Register State Context image” subsections through HOST/IA
> + * MMIO interface for debug purposes must follow the steps below:
> + *
> + * - SW should set the Force Wakeup bit to prevent GT from entering C6.
> + * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
> + * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
> + * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
> + * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
> + * - Read/Write to desired MMIO registers.
> + * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
> + * - Force Wakeup bit should be reset to enable C6 entry.
> + *
> + * XXX: don't nest or overlap calls to this API, it has no ref
> + * counting to track how many entities require the RCS to be
> + * blocked from being idle.
> + */
> +static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
> +{
> +	i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
> +		GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
> +	int ret = 0;
> +
> +	/* The only time there's no active context is while idle in execlist
> +	 * mode (though we shouldn't be using this in any other case)
> +	 */
> +	if (WARN_ON(!i915.enable_execlists))
> +		return -EINVAL;
> +
> +	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
> +
> +	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
> +		   _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
> +
> +	/* Note: we don't currently have a good handle on the maximum
> +	 * latency for this wake up so while we only need to hold rcs
> +	 * busy from process context we at least keep the waiting
> +	 * interruptible...
> +	 */
> +	ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
> +	if (ret == -ETIMEDOUT)
> +		DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
> +
> +	if (ret)
> +		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
> +
> +	return ret;
> +}
> +
> +static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
> +{
> +	if (WARN_ON(!i915.enable_execlists))
> +		return;
> +
> +	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
> +		   _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
> +	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
> +}
> +
> +/* NB: It must always remain pointer safe to run this even if the OA unit
> + * has been disabled.
> + *
> + * It's fine to put out-of-date values into these per-context registers
> + * in the case that the OA unit has been disabled.
> + */
> +static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
> +					   u32 *reg_state)
> +{
> +	struct drm_i915_private *dev_priv = ctx->i915;
> +	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
> +	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
> +	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
> +	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
> +	/* The MMIO offsets for Flex EU registers aren't contiguous */
> +	u32 flex_mmio[] = {
> +		i915_mmio_reg_offset(EU_PERF_CNTL0),
> +		i915_mmio_reg_offset(EU_PERF_CNTL1),
> +		i915_mmio_reg_offset(EU_PERF_CNTL2),
> +		i915_mmio_reg_offset(EU_PERF_CNTL3),
> +		i915_mmio_reg_offset(EU_PERF_CNTL4),
> +		i915_mmio_reg_offset(EU_PERF_CNTL5),
> +		i915_mmio_reg_offset(EU_PERF_CNTL6),
> +	};
> +	int i;
> +
> +	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
> +	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
> +				      GEN8_OA_TIMER_PERIOD_SHIFT) |
> +				     (dev_priv->perf.oa.periodic ?
> +				      GEN8_OA_TIMER_ENABLE : 0) |
> +				     GEN8_OA_COUNTER_RESUME;
> +
> +	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
> +		u32 state_offset = ctx_flexeu0 + i * 2;
> +		u32 mmio = flex_mmio[i];
> +
> +		/* This arbitrary default will select the 'EU FPU0 Pipeline
> +		 * Active' event. In the future it's anticipated that there
> +		 * will be an explicit 'No Event' we can select, but not yet...
> +		 */
> +		u32 value = 0;
> +		int j;
> +
> +		for (j = 0; j < n_flex_regs; j++) {
> +			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
> +				value = flex_regs[j].value;
> +				break;
> +			}
> +		}
> +
> +		reg_state[state_offset] = mmio;
> +		reg_state[state_offset+1] = value;
> +	}
> +}
> +
> +/* Manages updating the per-context aspects of the OA stream
> + * configuration across all contexts.
> + *
> + * The awkward consideration here is that OACTXCONTROL controls the
> + * exponent for periodic sampling which is primarily used for system
> + * wide profiling where we'd like a consistent sampling period even in
> + * the face of context switches.
> + *
> + * Our approach of updating the register state context (as opposed to
> + * say using a workaround batch buffer) ensures that the hardware
> + * won't automatically reload an out-of-date timer exponent even
> + * transiently before a WA BB could be parsed.
> + *
> + * This function needs to:
> + * - Ensure the currently running context's per-context OA state is
> + *   updated
> + * - Ensure that all existing contexts will have the correct per-context
> + *   OA state if they are scheduled for use.
> + * - Ensure any new contexts will be initialized with the correct
> + *   per-context OA state.
> + *
> + * Note: it's only the RCS/Render context that has any OA state.
> + */
> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_gem_context *ctx;
> +	int ret;
> +
> +	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +	if (ret)
> +		return ret;
> +
> +	/* Switch away from any user context.  */
> +	ret = i915_gem_switch_to_kernel_context(dev_priv);
> +	if (ret)
> +		return ret;
> +
> +	/* The OA register config is set up through the context image. This image
> +	 * might be written to by the GPU on context switch (in particular on
> +	 * lite-restore). This means we can't safely update a context's image,
> +	 * if this context is scheduled/submitted to run on the GPU.
> +	 *
> +	 * We could emit the OA register config through the batch buffer but
> +	 * this might leave small interval of time where the OA unit is
> +	 * configured at an invalid sampling period.
> +	 *
> +	 * So far the best way to work around this issue seems to be draining
> +	 * the GPU from any submitted work.
> +	 */
> +	ret = i915_gem_wait_for_idle(dev_priv,
> +				     I915_WAIT_INTERRUPTIBLE |
> +				     I915_WAIT_LOCKED);
> +	if (ret) {
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +		return ret;
> +	}
> +
> +	/* Update all contexts now that we've stalled the submission. */
> +	list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +		if (!ctx->engine[RCS].initialised)
> +			continue;
> +
> +		gen8_update_reg_state_unlocked(ctx,
> +					       ctx->engine[RCS].lrc_reg_state);
> +	}
> +
> +	mutex_unlock(&dev_priv->drm.struct_mutex);
> +
> +	/* Now update the current context.
> +	 *
> +	 * Note: Using MMIO to update per-context registers requires
> +	 * some extra care...
> +	 */
> +	ret = gen8_begin_ctx_mmio(dev_priv);
> +	if (ret) {
> +		DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
> +		return ret;
> +	}
> +
> +	I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
> +					GEN8_OA_TIMER_PERIOD_SHIFT) |
> +				      (dev_priv->perf.oa.periodic ?
> +				       GEN8_OA_TIMER_ENABLE : 0) |
> +				      GEN8_OA_COUNTER_RESUME));
> +
> +	config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
> +			dev_priv->perf.oa.flex_regs_len);
> +
> +	gen8_end_ctx_mmio(dev_priv);
> +
> +	return 0;
> +}
> +
> +static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
> +{
> +	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* We disable slice/unslice clock ratio change reports on SKL since
> +	 * they are too noisy. The HW generates a lot of redundant reports
> +	 * where the ratio hasn't really changed, causing a lot of redundant
> +	 * work for processes and increasing the chances we'll hit buffer
> +	 * overruns.
> +	 *
> +	 * Although we don't currently use the 'disable overrun' OABUFFER
> +	 * feature it's worth noting that clock ratio reports have to be
> +	 * disabled before considering using that feature since the HW doesn't
> +	 * correctly block these reports.
> +	 *
> +	 * Currently none of the high-level metrics we have depend on knowing
> +	 * this ratio to normalize.
> +	 *
> +	 * Note: This register is not power context saved and restored, but
> +	 * that's OK considering that we disable RC6 while the OA unit is
> +	 * enabled.
> +	 *
> +	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
> +	 * be read back from automatically triggered reports, as part of the
> +	 * RPT_ID field.
> +	 */
> +	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
> +		I915_WRITE(GEN8_OA_DEBUG,
> +			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
> +					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
> +	}
> +
> +	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
> +	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
> +		       dev_priv->perf.oa.mux_regs_len);
> +	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
> +
> +	/* It takes a fairly long time for a new MUX configuration to
> +	 * be applied after these register writes. This delay
> +	 * duration is taken from Haswell (derived empirically based on
> +	 * the render_basic config) but hopefully it covers the
> +	 * maximum configuration latency for Gen8+ too...
> +	 */
> +	usleep_range(15000, 20000);
> +
> +	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
> +		       dev_priv->perf.oa.b_counter_regs_len);
> +
> +	ret = gen8_configure_all_contexts(dev_priv);
> +	if (ret)
> +		return ret;
> +
> +	return 0;
> +}
> +
> +static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
> +{
> +	/* NOP */
> +}
> +
>   static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
>   {
>   	lockdep_assert_held(&dev_priv->perf.hook_lock);
> @@ -1157,6 +1807,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
>   	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
>   }
>
> +static void gen8_oa_enable(struct drm_i915_private *dev_priv)
> +{
> +	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
> +
> +	/* Reset buf pointers so we don't forward reports from before now.
> +	 *
> +	 * Think carefully if considering trying to avoid this, since it
> +	 * also ensures status flags and the buffer itself are cleared
> +	 * in error paths, and we have checks for invalid reports based
> +	 * on the assumption that certain fields are written to zeroed
> +	 * memory, which this helps maintain.
> +	 */
> +	gen8_init_oa_buffer(dev_priv);
> +
> +	/* Note: we don't rely on the hardware to perform single context
> +	 * filtering and instead filter on the cpu based on the context-id
> +	 * field of reports
> +	 */
> +	I915_WRITE(GEN8_OACONTROL, (report_format <<
> +				    GEN8_OA_REPORT_FORMAT_SHIFT) |
> +				   GEN8_OA_COUNTER_ENABLE);
> +}
> +
>   /**
>    * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
>    * @stream: An i915 perf stream opened for OA metrics
> @@ -1183,6 +1856,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
>   	I915_WRITE(GEN7_OACONTROL, 0);
>   }
>
> +static void gen8_oa_disable(struct drm_i915_private *dev_priv)
> +{
> +	I915_WRITE(GEN8_OACONTROL, 0);
> +}
> +
>   /**
>    * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
>    * @stream: An i915 perf stream opened for OA metrics
> @@ -1361,6 +2039,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
>   	return ret;
>   }
>
> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
> +			    struct i915_gem_context *ctx,
> +			    u32 *reg_state)
> +{
> +	struct drm_i915_private *dev_priv = engine->i915;
> +
> +	if (engine->id != RCS)
> +		return;
> +
> +	if (!dev_priv->perf.initialized)
> +		return;
> +
> +	gen8_update_reg_state_unlocked(ctx, reg_state);
> +}
> +
>   /**
>    * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
>    * @stream: An i915 perf stream
> @@ -1486,7 +2179,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
>   		container_of(hrtimer, typeof(*dev_priv),
>   			     perf.oa.poll_check_timer);
>
> -	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
> +	if (oa_buffer_check_unlocked(dev_priv)) {
>   		dev_priv->perf.oa.pollin = true;
>   		wake_up(&dev_priv->perf.oa.poll_wq);
>   	}
> @@ -1775,6 +2468,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
>   	struct i915_gem_context *specific_ctx = NULL;
>   	struct i915_perf_stream *stream = NULL;
>   	unsigned long f_flags = 0;
> +	bool privileged_op = true;
>   	int stream_fd;
>   	int ret;
>
> @@ -1792,12 +2486,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
>   		}
>   	}
>
> +	/* On Haswell the OA unit supports clock gating off for a specific
> +	 * context and in this mode there's no visibility of metrics for the
> +	 * rest of the system, which we consider acceptable for a
> +	 * non-privileged client.
> +	 *
> +	 * For Gen8+ the OA unit no longer supports clock gating off for a
> +	 * specific context and the kernel can't securely stop the counters
> +	 * from updating as system-wide / global values. Even though we can
> +	 * filter reports based on the included context ID we can't block
> +	 * clients from seeing the raw / global counter values via
> +	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
> +	 * enable the OA unit by default.
> +	 */
> +	if (IS_HASWELL(dev_priv) && specific_ctx)
> +		privileged_op = false;
> +
>   	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
>   	 * we check a dev.i915.perf_stream_paranoid sysctl option
>   	 * to determine if it's ok to access system wide OA counters
>   	 * without CAP_SYS_ADMIN privileges.
>   	 */
> -	if (!specific_ctx &&
> +	if (privileged_op &&
>   	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
>   		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
>   		ret = -EACCES;
> @@ -2069,9 +2779,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>    */
>   void i915_perf_register(struct drm_i915_private *dev_priv)
>   {
> -	if (!IS_HASWELL(dev_priv))
> -		return;
> -
>   	if (!dev_priv->perf.initialized)
>   		return;
>
> @@ -2087,11 +2794,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
>   	if (!dev_priv->perf.metrics_kobj)
>   		goto exit;
>
> -	if (i915_perf_register_sysfs_hsw(dev_priv)) {
> -		kobject_put(dev_priv->perf.metrics_kobj);
> -		dev_priv->perf.metrics_kobj = NULL;
> +	if (IS_HASWELL(dev_priv)) {
> +		if (i915_perf_register_sysfs_hsw(dev_priv))
> +			goto sysfs_error;
> +	} else if (IS_BROADWELL(dev_priv)) {
> +		if (i915_perf_register_sysfs_bdw(dev_priv))
> +			goto sysfs_error;
> +	} else if (IS_CHERRYVIEW(dev_priv)) {
> +		if (i915_perf_register_sysfs_chv(dev_priv))
> +			goto sysfs_error;
> +	} else if (IS_SKYLAKE(dev_priv)) {
> +		if (IS_SKL_GT2(dev_priv)) {
> +			if (i915_perf_register_sysfs_sklgt2(dev_priv))
> +				goto sysfs_error;
> +		} else if (IS_SKL_GT3(dev_priv)) {
> +			if (i915_perf_register_sysfs_sklgt3(dev_priv))
> +				goto sysfs_error;
> +		} else if (IS_SKL_GT4(dev_priv)) {
> +			if (i915_perf_register_sysfs_sklgt4(dev_priv))
> +				goto sysfs_error;
> +		} else
> +			goto sysfs_error;
> +	} else if (IS_BROXTON(dev_priv)) {
> +		if (i915_perf_register_sysfs_bxt(dev_priv))
> +			goto sysfs_error;
>   	}
>
> +	goto exit;
> +
> +sysfs_error:
> +	kobject_put(dev_priv->perf.metrics_kobj);
> +	dev_priv->perf.metrics_kobj = NULL;
> +
>   exit:
>   	mutex_unlock(&dev_priv->perf.lock);
>   }
> @@ -2107,13 +2841,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
>    */
>   void i915_perf_unregister(struct drm_i915_private *dev_priv)
>   {
> -	if (!IS_HASWELL(dev_priv))
> -		return;
> -
>   	if (!dev_priv->perf.metrics_kobj)
>   		return;
>
> -	i915_perf_unregister_sysfs_hsw(dev_priv);
> +        if (IS_HASWELL(dev_priv))
> +                i915_perf_unregister_sysfs_hsw(dev_priv);
> +        else if (IS_BROADWELL(dev_priv))
> +                i915_perf_unregister_sysfs_bdw(dev_priv);
> +        else if (IS_CHERRYVIEW(dev_priv))
> +                i915_perf_unregister_sysfs_chv(dev_priv);
> +        else if (IS_SKYLAKE(dev_priv)) {
> +		if (IS_SKL_GT2(dev_priv))
> +			i915_perf_unregister_sysfs_sklgt2(dev_priv);
> +		else if (IS_SKL_GT3(dev_priv))
> +			i915_perf_unregister_sysfs_sklgt3(dev_priv);
> +		else if (IS_SKL_GT4(dev_priv))
> +			i915_perf_unregister_sysfs_sklgt4(dev_priv);
> +	} else if (IS_BROXTON(dev_priv))
> +                i915_perf_unregister_sysfs_bxt(dev_priv);
>
>   	kobject_put(dev_priv->perf.metrics_kobj);
>   	dev_priv->perf.metrics_kobj = NULL;
> @@ -2172,36 +2917,105 @@ static struct ctl_table dev_root[] = {
>    */
>   void i915_perf_init(struct drm_i915_private *dev_priv)
>   {
> -	if (!IS_HASWELL(dev_priv))
> -		return;
> -
> -	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
> -		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> -	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
> -	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
> +	dev_priv->perf.oa.n_builtin_sets = 0;
> +
> +	if (IS_HASWELL(dev_priv)) {
> +		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
> +		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
> +		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
> +		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
> +		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
> +		dev_priv->perf.oa.ops.read = gen7_oa_read;
> +		dev_priv->perf.oa.ops.oa_hw_tail_read =
> +			gen7_oa_hw_tail_read;
> +
> +		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
> +
> +		dev_priv->perf.oa.n_builtin_sets =
> +			i915_oa_n_builtin_metric_sets_hsw;
> +	} else if (i915.enable_execlists) {
> +		/* Note that although we could theoretically also support the
> +		 * legacy ringbuffer mode on BDW (and earlier iterations of
> +		 * this driver, before upstreaming did this) it didn't seem
> +		 * worth the complexity to maintain now that BDW+ enable
> +		 * execlist mode by default.
> +		 */
>
> -	INIT_LIST_HEAD(&dev_priv->perf.streams);
> -	mutex_init(&dev_priv->perf.lock);
> -	spin_lock_init(&dev_priv->perf.hook_lock);
> -	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
> +		if (IS_GEN8(dev_priv)) {
> +			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
> +			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
> +			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
> +
> +			if (IS_BROADWELL(dev_priv)) {
> +				dev_priv->perf.oa.n_builtin_sets =
> +					i915_oa_n_builtin_metric_sets_bdw;
> +				dev_priv->perf.oa.ops.select_metric_set =
> +					i915_oa_select_metric_set_bdw;
> +			} else if (IS_CHERRYVIEW(dev_priv)) {
> +				dev_priv->perf.oa.n_builtin_sets =
> +					i915_oa_n_builtin_metric_sets_chv;
> +				dev_priv->perf.oa.ops.select_metric_set =
> +					i915_oa_select_metric_set_chv;
> +			}
> +		} else if (IS_GEN9(dev_priv)) {
> +			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
> +			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
> +			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
> +
> +			if (IS_SKL_GT2(dev_priv)) {
> +				dev_priv->perf.oa.n_builtin_sets =
> +					i915_oa_n_builtin_metric_sets_sklgt2;
> +				dev_priv->perf.oa.ops.select_metric_set =
> +					i915_oa_select_metric_set_sklgt2;
> +			} else if (IS_SKL_GT3(dev_priv)) {
> +				dev_priv->perf.oa.n_builtin_sets =
> +					i915_oa_n_builtin_metric_sets_sklgt3;
> +				dev_priv->perf.oa.ops.select_metric_set =
> +					i915_oa_select_metric_set_sklgt3;
> +			} else if (IS_SKL_GT4(dev_priv)) {
> +				dev_priv->perf.oa.n_builtin_sets =
> +					i915_oa_n_builtin_metric_sets_sklgt4;
> +				dev_priv->perf.oa.ops.select_metric_set =
> +					i915_oa_select_metric_set_sklgt4;
> +			} else if (IS_BROXTON(dev_priv)) {
> +				dev_priv->perf.oa.n_builtin_sets =
> +					i915_oa_n_builtin_metric_sets_bxt;
> +				dev_priv->perf.oa.ops.select_metric_set =
> +					i915_oa_select_metric_set_bxt;
> +			}
> +		}
>
> -	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
> -	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
> -	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
> -	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
> -	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
> -	dev_priv->perf.oa.ops.read = gen7_oa_read;
> -	dev_priv->perf.oa.ops.oa_buffer_check =
> -		gen7_oa_buffer_check_unlocked;
> +		if (dev_priv->perf.oa.n_builtin_sets) {
> +			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
> +			dev_priv->perf.oa.ops.enable_metric_set =
> +				gen8_enable_metric_set;
> +			dev_priv->perf.oa.ops.disable_metric_set =
> +				gen8_disable_metric_set;
> +			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
> +			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
> +			dev_priv->perf.oa.ops.read = gen8_oa_read;
> +			dev_priv->perf.oa.ops.oa_hw_tail_read =
> +				gen8_oa_hw_tail_read;
> +
> +			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
> +		}
> +	}
>
> -	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
> +	if (dev_priv->perf.oa.n_builtin_sets) {
> +		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
> +				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> +		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
> +		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
>
> -	dev_priv->perf.oa.n_builtin_sets =
> -		i915_oa_n_builtin_metric_sets_hsw;
> +		INIT_LIST_HEAD(&dev_priv->perf.streams);
> +		mutex_init(&dev_priv->perf.lock);
> +		spin_lock_init(&dev_priv->perf.hook_lock);
> +		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
>
> -	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
> +		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
>
> -	dev_priv->perf.initialized = true;
> +		dev_priv->perf.initialized = true;
> +	}
>   }
>
>   /**
> @@ -2216,5 +3030,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
>   	unregister_sysctl_table(dev_priv->perf.sysctl_header);
>
>   	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
> +
>   	dev_priv->perf.initialized = false;
>   }
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 4c72adae368b..c0bbba6d5b78 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>
>   #define GEN8_OACTXID _MMIO(0x2364)
>
> +#define GEN8_OA_DEBUG _MMIO(0x2B04)
> +#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
> +#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
> +#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
> +#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
> +
>   #define GEN8_OACONTROL _MMIO(0x2B00)
>   #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
>   #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
> @@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>   #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
>   #define  GEN7_OABUFFER_RESUME		    (1<<0)
>
> +#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
>   #define GEN8_OABUFFER _MMIO(0x2b14)
>
>   #define GEN7_OASTATUS1 _MMIO(0x2364)
> @@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>   #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)
>
>   #define GEN8_OAHEADPTR _MMIO(0x2B0C)
> +#define GEN8_OAHEADPTR_MASK    0xffffffc0
>   #define GEN8_OATAILPTR _MMIO(0x2B10)
> +#define GEN8_OATAILPTR_MASK    0xffffffc0
>
>   #define OABUFFER_SIZE_128K  (0<<3)
>   #define OABUFFER_SIZE_256K  (1<<3)
> @@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>
>   #define OA_MEM_SELECT_GGTT  (1<<0)
>
> +/*
> + * Flexible, Aggregate EU Counter Registers.
> + * Note: these aren't contiguous
> + */
>   #define EU_PERF_CNTL0	    _MMIO(0xe458)
> +#define EU_PERF_CNTL1	    _MMIO(0xe558)
> +#define EU_PERF_CNTL2	    _MMIO(0xe658)
> +#define EU_PERF_CNTL3	    _MMIO(0xe758)
> +#define EU_PERF_CNTL4	    _MMIO(0xe45c)
> +#define EU_PERF_CNTL5	    _MMIO(0xe55c)
> +#define EU_PERF_CNTL6	    _MMIO(0xe65c)
>
>   #define GDT_CHICKEN_BITS    _MMIO(0x9840)
>   #define GT_NOA_ENABLE	    0x00000080
> @@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
>   #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
>   #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)
>
> +#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
> +#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
> +
>   /* Fuse readout registers for GT */
>   #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
>   #define   CHV_FGT_DISABLE_SS0		(1 << 10)
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 7df278fe492e..351e231a02f4 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1879,6 +1879,8 @@ static void execlists_init_reg_state(u32 *regs,
>   		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
>   		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
>   			make_rpcs(dev_priv));
> +
> +		i915_oa_init_reg_state(engine, ctx, regs);
>   	}
>   }
>
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index 464547d08173..15bc9f78ba4d 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
>   };
>
>   enum drm_i915_oa_format {
> -	I915_OA_FORMAT_A13 = 1,
> -	I915_OA_FORMAT_A29,
> -	I915_OA_FORMAT_A13_B8_C8,
> -	I915_OA_FORMAT_B4_C8,
> -	I915_OA_FORMAT_A45_B8_C8,
> -	I915_OA_FORMAT_B4_C8_A16,
> -	I915_OA_FORMAT_C4_B8,
> +	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
> +	I915_OA_FORMAT_A29,	    /* HSW only */
> +	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
> +	I915_OA_FORMAT_B4_C8,	    /* HSW only */
> +	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
> +	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
> +	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
> +
> +	/* Gen8+ */
> +	I915_OA_FORMAT_A12,
> +	I915_OA_FORMAT_A12_B8_C8,
> +	I915_OA_FORMAT_A32u40_A4u32_B8_C8,
>
>   	I915_OA_FORMAT_MAX	    /* non-ABI */
>   };
> --
> 2.11.0
>

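For anyone wanting to try this from userspace once the series lands, here is a
minimal, untested sketch of opening a Gen8+ OA stream through the i915 perf
uAPI from include/uapi/drm/i915_drm.h. As the commit message explains, a
system-wide stream like this needs root or dev.i915.perf_stream_paranoid=0.
The metrics set ID and the render node path are placeholders: the real ID is
read from the per-config "id" file under the sysfs "metrics" directory that
i915_perf_register() creates.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/i915_drm.h>

int main(void)
{
	int drm_fd = open("/dev/dri/renderD128", O_RDWR);
	if (drm_fd < 0) {
		perror("open");
		return 1;
	}

	uint64_t properties[] = {
		/* Include raw OA reports in the samples read() back */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,

		/* Metrics set ID (placeholder: read the real value from sysfs) */
		DRM_I915_PERF_PROP_OA_METRICS_SET, 1,

		/* One of the Gen8+ report formats added by this series */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,

		/* Periodic sampling period ~ 2^(exponent + 1) timestamp ticks */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC | I915_PERF_FLAG_DISABLED,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
	if (stream_fd < 0) {
		perror("DRM_IOCTL_I915_PERF_OPEN");
		return 1;
	}

	/* The stream was opened disabled; start the OA unit now */
	ioctl(stream_fd, I915_PERF_IOCTL_ENABLE, 0);

	/* Blocking read of a chunk of records; each record begins with a
	 * struct drm_i915_perf_record_header giving its type and size.
	 */
	char buf[16 * 1024];
	ssize_t len = read(stream_fd, buf, sizeof(buf));
	printf("read %zd bytes of OA records\n", len);

	close(stream_fd);
	close(drm_fd);
	return 0;
}
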
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread
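
As a side note on the gen8_append_oa_reports() loop in the patch quoted above:
the head/tail handling is just power-of-two circular-buffer arithmetic. A
small standalone model is below, assuming OA_TAKEN(tail, head) is the usual
((tail - head) & (OA_BUFFER_SIZE - 1)) distance from i915_perf.c and using
offsets already made relative to the buffer start (the kernel code first
subtracts gtt_offset).

#include <stdint.h>
#include <stdio.h>

#define OA_BUFFER_SIZE (16 * 1024 * 1024)
/* Assumed to match the kernel's definition: distance from head to tail */
#define OA_TAKEN(tail, head) (((tail) - (head)) & (OA_BUFFER_SIZE - 1))

int main(void)
{
	const uint32_t mask = OA_BUFFER_SIZE - 1;
	const uint32_t report_size = 256; /* A32u40_A4u32_B8_C8 reports */
	uint32_t head = OA_BUFFER_SIZE - 2 * report_size; /* near the end... */
	uint32_t tail = 3 * report_size;                  /* ...tail has wrapped */
	uint32_t taken;

	/* Same loop shape as the kernel: consume one report per iteration,
	 * wrapping head with the power-of-two mask.
	 */
	for (; (taken = OA_TAKEN(tail, head)); head = (head + report_size) & mask)
		printf("report at offset %8u, %u bytes still pending\n",
		       head, taken);

	return 0;
}

Because OA_BUFFER_SIZE and every report size are powers of two, the same mask
both wraps the head and guarantees a report never straddles the end of the
buffer, which is what the WARN_ON((OA_BUFFER_SIZE - head) < report_size)
sanity check in the patch relies on.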

* Re: [PATCH v6 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-24 18:49       ` [PATCH v6 " Lionel Landwerlin
  2017-04-25 16:20         ` Lionel Landwerlin
@ 2017-04-25 16:42         ` Matthew Auld
  2017-04-25 17:10           ` Lionel Landwerlin
  2017-04-25 17:30           ` [PATCH v7 " Lionel Landwerlin
  1 sibling, 2 replies; 87+ messages in thread
From: Matthew Auld @ 2017-04-25 16:42 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: Intel Graphics Development

On 24 April 2017 at 19:49, Lionel Landwerlin
<lionel.g.landwerlin@intel.com> wrote:
> From: Robert Bragg <robert@sixbynine.org>
>
> Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
> share (more-or-less) the same OA unit design.
>
> Of particular note in comparison to Haswell: some OA unit HW config
> state has become per-context state and as a consequence it is somewhat
> more complicated to manage synchronous state changes from the cpu while
> there's no guarantee of what context (if any) is currently actively
> running on the gpu.
>
> The periodic sampling frequency which can be particularly useful for
> system-wide analysis (as opposed to command stream synchronised
> MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
> have become per-context save and restored (while the OABUFFER
> destination is still a shared, system-wide resource).
>
> This support for gen8+ takes care to consider a number of timing
> challenges involved in synchronously updating per-context state
> primarily by programming all config state from the cpu and updating all
> current and saved contexts synchronously while the OA unit is still
> disabled.
>
> The driver intentionally avoids depending on command streamer
> programming to update OA state considering the lack of synchronization
> between the automatic loading of OACTXCONTROL state (that includes the
> periodic sampling state and enable state) on context restore and the
> parsing of any general purpose BB the driver can control. I.e. this
> implementation is careful to avoid the possibility of a context restore
> temporarily enabling any out-of-date periodic sampling state. In
> addition to the risk of transiently-out-of-date state being loaded
> automatically, there are also internal HW latencies involved in the
> loading of MUX configurations which would be difficult to account for
> from the command streamer (and we only want to enable the unit once
> the MUX configuration is complete).
>
> Since the Gen8+ OA unit design no longer supports clock gating the unit
> off for a single given context (which effectively stopped any progress
> of counters while any other context was running) and instead supports
> tagging OA reports with a context ID for filtering on the CPU, it means
> we can no longer hide the system-wide progress of counters from a
> non-privileged application only interested in metrics for its own
> context. Although we could theoretically try and subtract the progress
> of other contexts before forwarding reports via read() we aren't in a
> position to filter reports captured via MI_REPORT_PERF_COUNT commands.
> As a result, for Gen8+, we always require the
> dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
> if not root.
>
> v5: Drain submitted requests when enabling metric set to ensure no
>     lite-restore erases the context image we just updated (Lionel)
>
> v6: In addition to drain, switch to kernel context & update all
>     contexts in place (Chris)
>
> Signed-off-by: Robert Bragg <robert@sixbynine.org>
> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
> ---
>  drivers/gpu/drm/i915/i915_drv.h  |  45 +-
>  drivers/gpu/drm/i915/i915_perf.c | 957 ++++++++++++++++++++++++++++++++++++---
>  drivers/gpu/drm/i915/i915_reg.h  |  22 +
>  drivers/gpu/drm/i915/intel_lrc.c |   2 +
>  include/uapi/drm/i915_drm.h      |  19 +-
>  5 files changed, 952 insertions(+), 93 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index ffa1fc5eddfd..676b1227067c 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2064,9 +2064,17 @@ struct i915_oa_ops {
>         void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
>
>         /**
> -        * @enable_metric_set: Applies any MUX configuration to set up the
> -        * Boolean and Custom (B/C) counters that are part of the counter
> -        * reports being sampled. May apply system constraints such as
> +        * @select_metric_set: The auto generated code that checks whether a
> +        * requested OA config is applicable to the system and if so sets up
> +        * the mux, oa and flex eu register config pointers according to the
> +        * current dev_priv->perf.oa.metrics_set.
> +        */
> +       int (*select_metric_set)(struct drm_i915_private *dev_priv);
> +
> +       /**
> +        * @enable_metric_set: Selects and applies any MUX configuration to set
> +        * up the Boolean and Custom (B/C) counters that are part of the
> +        * counter reports being sampled. May apply system constraints such as
>          * disabling EU clock gating as required.
>          */
>         int (*enable_metric_set)(struct drm_i915_private *dev_priv);
> @@ -2097,20 +2105,13 @@ struct i915_oa_ops {
>                     size_t *offset);
>
>         /**
> -        * @oa_buffer_check: Check for OA buffer data + update tail
> -        *
> -        * This is either called via fops or the poll check hrtimer (atomic
> -        * ctx) without any locks taken.
> +        * @oa_hw_tail_read: read the OA tail pointer register
>          *
> -        * It's safe to read OA config state here unlocked, assuming that this
> -        * is only called while the stream is enabled, while the global OA
> -        * configuration can't be modified.
> -        *
> -        * Efficiency is more important than avoiding some false positives
> -        * here, which will be handled gracefully - likely resulting in an
> -        * %EAGAIN error for userspace.
> +        * In particular this enables us to share all the fiddly code for
> +        * handling the OA unit tail pointer race that affects multiple
> +        * generations.
>          */
> -       bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
> +       u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
>  };
>
>  struct intel_cdclk_state {
> @@ -2472,6 +2473,7 @@ struct drm_i915_private {
>                         struct {
>                                 struct i915_vma *vma;
>                                 u8 *vaddr;
> +                               u32 last_ctx_id;
>                                 int format;
>                                 int format_size;
>
> @@ -2541,6 +2543,14 @@ struct drm_i915_private {
>                         } oa_buffer;
>
>                         u32 gen7_latched_oastatus1;
> +                       u32 ctx_oactxctrl_offset;
> +                       u32 ctx_flexeu0_offset;
> +
> +                       /* The RPT_ID/reason field for Gen8+ includes a bit
> +                        * to determine if the CTX ID in the report is valid
> +                        * but the specific bit differs between Gen 8 and 9
> +                        */
> +                       u32 gen8_valid_ctx_bit;
>
>                         struct i915_oa_ops ops;
>                         const struct i915_oa_format *oa_formats;
> @@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
>  #define IS_KBL_ULX(dev_priv)   (INTEL_DEVID(dev_priv) == 0x590E || \
>                                  INTEL_DEVID(dev_priv) == 0x5915 || \
>                                  INTEL_DEVID(dev_priv) == 0x591E)
> +#define IS_SKL_GT2(dev_priv)   (IS_SKYLAKE(dev_priv) && \
> +                                (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
>  #define IS_SKL_GT3(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>                                  (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
>  #define IS_SKL_GT4(dev_priv)   (IS_SKYLAKE(dev_priv) && \
> @@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
>
>  int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>                          struct drm_file *file);
> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
> +                           struct i915_gem_context *ctx,
> +                           uint32_t *reg_state);
>
>  /* i915_gem_evict.c */
>  int __must_check i915_gem_evict_something(struct i915_address_space *vm,
> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
> index 3277a52ce98e..f2688dee5a03 100644
> --- a/drivers/gpu/drm/i915/i915_perf.c
> +++ b/drivers/gpu/drm/i915/i915_perf.c
> @@ -196,6 +196,12 @@
>
>  #include "i915_drv.h"
>  #include "i915_oa_hsw.h"
> +#include "i915_oa_bdw.h"
> +#include "i915_oa_chv.h"
> +#include "i915_oa_sklgt2.h"
> +#include "i915_oa_sklgt3.h"
> +#include "i915_oa_sklgt4.h"
> +#include "i915_oa_bxt.h"
>
>  /* HW requires this to be a power of two, between 128k and 16M, though driver
>   * is currently generally designed assuming the largest 16M size is used such
> @@ -215,7 +221,7 @@
>   *
>   * Although this can be observed explicitly while copying reports to userspace
>   * by checking for a zeroed report-id field in tail reports, we want to account
> - * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
> + * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
>   * read() attempts.
>   *
>   * In effect we define a tail pointer for reading that lags the real tail
> @@ -237,7 +243,7 @@
>   * indicates that an updated tail pointer is needed.
>   *
>   * Most of the implementation details for this workaround are in
> - * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
> + * oa_buffer_check_unlocked() and _append_oa_reports()
>   *
>   * Note for posterity: previously the driver used to define an effective tail
>   * pointer that lagged the real pointer by a 'tail margin' measured in bytes
> @@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
>
>  #define INVALID_CTX_ID 0xffffffff
>
> +/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
> +#define OAREPORT_REASON_MASK           0x3f
> +#define OAREPORT_REASON_SHIFT          19
> +#define OAREPORT_REASON_TIMER          (1<<0)
> +#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
> +#define OAREPORT_REASON_CLK_RATIO      (1<<5)
> +
>
>  /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
>   *
> @@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
>         [I915_OA_FORMAT_C4_B8]      = { 7, 64 },
>  };
>
> +static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
> +       [I915_OA_FORMAT_A12]                = { 0, 64 },
> +       [I915_OA_FORMAT_A12_B8_C8]          = { 2, 128 },
> +       [I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
> +       [I915_OA_FORMAT_C4_B8]              = { 7, 64 },
> +};
> +
>  #define SAMPLE_OA_REPORT      (1<<0)
>
>  /**
> @@ -332,8 +352,20 @@ struct perf_open_properties {
>         int oa_period_exponent;
>  };
>
> +static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
> +{
> +       return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
> +}
> +
> +static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
> +{
> +       u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
> +
> +       return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
> +}
> +
>  /**
> - * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
> + * oa_buffer_check_unlocked - check for data and update tail ptr state
>   * @dev_priv: i915 device instance
>   *
>   * This is either called via fops (for blocking reads in user ctx) or the poll
> @@ -356,12 +388,11 @@ struct perf_open_properties {
>   *
>   * Returns: %true if the OA buffer contains data, else %false
>   */
> -static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
> +static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>  {
>         int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>         unsigned long flags;
>         unsigned int aged_idx;
> -       u32 oastatus1;
>         u32 head, hw_tail, aged_tail, aging_tail;
>         u64 now;
>
> @@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>         aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
>         aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
>
> -       oastatus1 = I915_READ(GEN7_OASTATUS1);
> -       hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
> +       hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
>
>         /* The tail pointer increases in 64 byte increments,
>          * not in report_size steps...
> @@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>         if (aging_tail != INVALID_TAIL_PTR &&
>             ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
>              OA_TAIL_MARGIN_NSEC)) {
> +
>                 aged_idx ^= 1;
>                 dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
>
> @@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
>   *
>   * Returns: 0 on success, negative error code on failure.
>   */
> +static int gen8_append_oa_reports(struct i915_perf_stream *stream,
> +                                 char __user *buf,
> +                                 size_t count,
> +                                 size_t *offset)
> +{
> +       struct drm_i915_private *dev_priv = stream->dev_priv;
> +       int report_size = dev_priv->perf.oa.oa_buffer.format_size;
> +       u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
> +       u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
> +       u32 mask = (OA_BUFFER_SIZE - 1);
> +       size_t start_offset = *offset;
> +       unsigned long flags;
> +       unsigned int aged_tail_idx;
> +       u32 head, tail;
> +       u32 taken;
> +       int ret = 0;
> +
> +       if (WARN_ON(!stream->enabled))
> +               return -EIO;
> +
> +       spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +       head = dev_priv->perf.oa.oa_buffer.head;
> +       aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
> +       tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
> +
> +       spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +       /* An invalid tail pointer here means we're still waiting for the poll
> +        * hrtimer callback to give us a pointer
> +        */
> +       if (tail == INVALID_TAIL_PTR)
> +               return -EAGAIN;
> +
> +       /* NB: oa_buffer.head/tail include the gtt_offset which we don't want
> +        * while indexing relative to oa_buf_base.
> +        */
> +       head -= gtt_offset;
> +       tail -= gtt_offset;
> +
> +       /* An out of bounds or misaligned head or tail pointer implies a driver
> +        * bug since we validate + align the tail pointers we read from the
> +        * hardware and we are in full control of the head pointer which should
> +        * only be incremented by multiples of the report size (notably also
> +        * all a power of two).
> +        */
> +       if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
> +                     tail > OA_BUFFER_SIZE || tail % report_size,
> +                     "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
> +                     head, tail))
> +               return -EIO;
> +
> +
> +       for (/* none */;
> +            (taken = OA_TAKEN(tail, head));
> +            head = (head + report_size) & mask) {
> +               u8 *report = oa_buf_base + head;
> +               u32 *report32 = (void *)report;
> +               u32 ctx_id;
> +               u32 reason;
> +
> +               /* All the report sizes factor neatly into the buffer
> +                * size so we never expect to see a report split
> +                * between the beginning and end of the buffer.
> +                *
> +                * Given the initial alignment check a misalignment
> +                * here would imply a driver bug that would result
> +                * in an overrun.
> +                */
> +               if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
> +                       DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
> +                       break;
> +               }
> +
> +               /* The reason field includes flags identifying what
> +                * triggered this specific report (mostly timer
> +                * triggered or e.g. due to a context switch).
> +                *
> +                * This field is never expected to be zero so we can
> +                * check that the report isn't invalid before copying
> +                * it to userspace...
> +                */
> +               reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
> +                         OAREPORT_REASON_MASK);
> +               if (reason == 0) {
> +                       if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
> +                               DRM_NOTE("Skipping spurious, invalid OA report\n");
> +                       continue;
> +               }
> +
> +               /* XXX: Just keep the lower 21 bits for now since I'm not
> +                * entirely sure if the HW touches any of the higher bits in
> +                * this field
> +                */
> +               ctx_id = report32[2] & 0x1fffff;
> +
> +               /* Squash whatever is in the CTX_ID field if it's marked as
> +                * invalid to be sure we avoid false-positive, single-context
> +                * filtering below...
> +                *
> +                * Note that we don't clear the valid_ctx_bit so userspace can
> +                * understand that the ID has been squashed by the kernel.
> +                */
> +               if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
> +                       ctx_id = report32[2] = INVALID_CTX_ID;
> +
> +               /* NB: For Gen 8 the OA unit no longer supports clock gating
> +                * off for a specific context and the kernel can't securely
> +                * stop the counters from updating as system-wide / global
> +                * values.
> +                *
> +                * Automatic reports now include a context ID so reports can be
> +                * filtered on the cpu but it's not worth trying to
> +                * automatically subtract/hide counter progress for other
> +                * contexts while filtering since we can't stop userspace
> +                * issuing MI_REPORT_PERF_COUNT commands which would still
> +                * provide a side-band view of the real values.
> +                *
> +                * To allow userspace (such as Mesa/GL_INTEL_performance_query)
> +                * to normalize counters for a single filtered context, it
> +                * needs to be forwarded bookend context-switch reports so that
> +                * it can track switches in between MI_REPORT_PERF_COUNT commands
> +                * and can itself subtract/ignore the progress of counters
> +                * associated with other contexts. Note that the hardware
> +                * automatically triggers reports when switching to a new
> +                * context which are tagged with the ID of the newly active
> +                * context. To avoid the complexity (and likely fragility) of
> +                * reading ahead while parsing reports to try and minimize
> +                * forwarding redundant context switch reports (i.e. between
> +                * other, unrelated contexts) we simply elect to forward them
> +                * all.
> +                *
> +                * We don't rely solely on the reason field to identify context
> +                * switches since it's not uncommon for periodic samples to
> +                * identify a switch before any 'context switch' report.
> +                */
> +               if (!dev_priv->perf.oa.exclusive_stream->ctx ||
> +                   dev_priv->perf.oa.specific_ctx_id == ctx_id ||
> +                   (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
> +                    dev_priv->perf.oa.specific_ctx_id) ||
> +                   reason & OAREPORT_REASON_CTX_SWITCH) {
> +
> +                       /* While filtering for a single context we avoid
> +                        * leaking the IDs of other contexts.
> +                        */
> +                       if (dev_priv->perf.oa.exclusive_stream->ctx &&
> +                           dev_priv->perf.oa.specific_ctx_id != ctx_id) {
> +                               report32[2] = INVALID_CTX_ID;
> +                       }
> +
> +                       ret = append_oa_sample(stream, buf, count, offset,
> +                                              report);
> +                       if (ret)
> +                               break;
> +
> +                       dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
> +               }
> +
> +               /* The above reason field sanity check is based on
> +                * the assumption that the OA buffer is initially
> +                * zeroed and we reset the field after copying so the
> +                * check is still meaningful once old reports start
> +                * being overwritten.
> +                */
> +               report32[0] = 0;
> +       }
> +
> +       if (start_offset != *offset) {
> +               spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +               /* We removed the gtt_offset for the copy loop above, indexing
> +                * relative to oa_buf_base so put back here...
> +                */
> +               head += gtt_offset;
> +
> +               I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
> +               dev_priv->perf.oa.oa_buffer.head = head;
> +
> +               spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +       }
> +
> +       return ret;
> +}
> +
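
Side note for readers of the copy loop above: assuming OA_TAKEN keeps its
definition from the existing gen7 code, the ring arithmetic is simply

    /* bytes available in the circular buffer between head and tail */
    #define OA_TAKEN(tail, head) ((tail - head) & (OA_BUFFER_SIZE - 1))

so each iteration consumes one report_size chunk until head catches up
with the aged tail snapshotted under ptr_lock.
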
> +/**
> + * gen8_oa_read - copy status records then buffered OA reports
> + * @stream: An i915-perf stream opened for OA metrics
> + * @buf: destination buffer given by userspace
> + * @count: the number of bytes userspace wants to read
> + * @offset: (inout): the current position for writing into @buf
> + *
> + * Checks OA unit status registers and if necessary appends corresponding
> + * status records for userspace (such as for a buffer full condition) and then
> + * initiates appending any buffered OA reports.
> + *
> + * Updates @offset according to the number of bytes successfully copied into
> + * the userspace buffer.
> + *
> + * NB: some data may be successfully copied to the userspace buffer
> + * even if an error is returned, and this is reflected in the
> + * updated @offset.
> + *
> + * Returns: zero on success or a negative error code
> + */
> +static int gen8_oa_read(struct i915_perf_stream *stream,
> +                       char __user *buf,
> +                       size_t count,
> +                       size_t *offset)
> +{
> +       struct drm_i915_private *dev_priv = stream->dev_priv;
> +       u32 oastatus;
> +       int ret;
> +
> +       if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
> +               return -EIO;
> +
> +       oastatus = I915_READ(GEN8_OASTATUS);
> +
> +       /* We treat OABUFFER_OVERFLOW as a significant error:
> +        *
> +        * Although theoretically we could handle this more gracefully
> +        * sometimes, some Gens don't correctly suppress certain
> +        * automatically triggered reports in this condition and so we
> +        * have to assume that old reports are now being trampled
> +        * over.
> +        *
> +        * Considering how we don't currently give userspace control
> +        * over the OA buffer size and always configure a large 16MB
> +        * buffer, a buffer overflow in any case likely indicates
> +        * that something has gone quite badly wrong.
> +        */
> +       if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
> +               ret = append_oa_status(stream, buf, count, offset,
> +                                      DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
> +               if (ret)
> +                       return ret;
> +
> +               DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
> +                         dev_priv->perf.oa.period_exponent);
> +
> +               dev_priv->perf.oa.ops.oa_disable(dev_priv);
> +               dev_priv->perf.oa.ops.oa_enable(dev_priv);
> +
> +               /* Note: .oa_enable() is expected to re-init the oabuffer
> +                * and reset GEN8_OASTATUS for us
> +                */
> +               oastatus = I915_READ(GEN8_OASTATUS);
> +       }
> +
> +       if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
> +               ret = append_oa_status(stream, buf, count, offset,
> +                                      DRM_I915_PERF_RECORD_OA_REPORT_LOST);
> +               if (ret)
> +                       return ret;
> +               I915_WRITE(GEN8_OASTATUS,
> +                          oastatus & ~GEN8_OASTATUS_REPORT_LOST);
> +       }
> +
> +       return gen8_append_oa_reports(stream, buf, count, offset);
> +}
> +
> +/**
> + * gen7_append_oa_reports - Copies all buffered OA reports into userspace read() buffer.
> + * @stream: An i915-perf stream opened for OA metrics
> + * @buf: destination buffer given by userspace
> + * @count: the number of bytes userspace wants to read
> + * @offset: (inout): the current position for writing into @buf
> + *
> + * Notably any error condition resulting in a short read (-%ENOSPC or
> + * -%EFAULT) will be returned even though one or more records may
> + * have been successfully copied. In this case it's up to the caller
> + * to decide if the error should be squashed before returning to
> + * userspace.
> + *
> + * Note: reports are consumed from the head, and appended to the
> + * tail, so the tail chases the head?... If you think that's mad
> + * and back-to-front you're not alone, but this follows the
> + * Gen PRM naming convention.
> + *
> + * Returns: 0 on success, negative error code on failure.
> + */
>  static int gen7_append_oa_reports(struct i915_perf_stream *stream,
>                                   char __user *buf,
>                                   size_t count,
> @@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
>                 if (ret)
>                         return ret;
>
> -               DRM_DEBUG("OA buffer overflow: force restart\n");
> +               DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
> +                         dev_priv->perf.oa.period_exponent);
>
>                 dev_priv->perf.oa.ops.oa_disable(dev_priv);
>                 dev_priv->perf.oa.ops.oa_enable(dev_priv);
> @@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
>                 return -EIO;
>
>         return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
> -                                       dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
> +                                       oa_buffer_check_unlocked(dev_priv));
>  }
>
>  /**
> @@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
>  static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
>  {
>         struct drm_i915_private *dev_priv = stream->dev_priv;
> -       struct intel_engine_cs *engine = dev_priv->engine[RCS];
> -       int ret;
>
> -       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> -       if (ret)
> -               return ret;
> +       if (i915.enable_execlists)
> +               dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
> +       else {
> +               struct intel_engine_cs *engine = dev_priv->engine[RCS];
> +               int ret;
>
> -       /* As the ID is the gtt offset of the context's vma we pin
> -        * the vma to ensure the ID remains fixed.
> -        *
> -        * NB: implied RCS engine...
> -        */
> -       ret = engine->context_pin(engine, stream->ctx);
> -       if (ret)
> -               goto unlock;
> +               ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +               if (ret)
> +                       return ret;
>
> -       /* Explicitly track the ID (instead of calling i915_ggtt_offset()
> -        * on the fly) considering the difference with gen8+ and
> -        * execlists
> -        */
> -       dev_priv->perf.oa.specific_ctx_id =
> -               i915_ggtt_offset(stream->ctx->engine[engine->id].state);
> +               /* As the ID is the gtt offset of the context's vma we pin
> +                * the vma to ensure the ID remains fixed.
> +                */
> +               ret = engine->context_pin(engine, stream->ctx);
> +               if (ret) {
> +                       mutex_unlock(&dev_priv->drm.struct_mutex);
> +                       return ret;
> +               }
>
> -unlock:
> -       mutex_unlock(&dev_priv->drm.struct_mutex);
> +               /* Explicitly track the ID (instead of calling
> +                * i915_ggtt_offset() on the fly) considering the difference
> +                * with gen8+ and execlists
> +                */
> +               dev_priv->perf.oa.specific_ctx_id =
> +                       i915_ggtt_offset(stream->ctx->engine[engine->id].state);
>
> -       return ret;
> +               mutex_unlock(&dev_priv->drm.struct_mutex);
> +       }
> +
> +       return 0;
>  }
>
>  /**
> @@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
>         struct drm_i915_private *dev_priv = stream->dev_priv;
>         struct intel_engine_cs *engine = dev_priv->engine[RCS];
>
> -       mutex_lock(&dev_priv->drm.struct_mutex);
> +       if (i915.enable_execlists) {
> +               dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> +       } else {
> +               mutex_lock(&dev_priv->drm.struct_mutex);
>
> -       dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> -       engine->context_unpin(engine, stream->ctx);
> +               dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> +               engine->context_unpin(engine, stream->ctx);
>
> -       mutex_unlock(&dev_priv->drm.struct_mutex);
> +               mutex_unlock(&dev_priv->drm.struct_mutex);
> +       }
>  }
>
>  static void
> @@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
>         dev_priv->perf.oa.pollin = false;
>  }
>
> +static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
> +{
> +       u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +       I915_WRITE(GEN8_OASTATUS, 0);
> +       I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
> +       dev_priv->perf.oa.oa_buffer.head = gtt_offset;
> +
> +       I915_WRITE(GEN8_OABUFFER_UDW, 0);
> +
> +       /* PRM says:
> +        *
> +        *  "This MMIO must be set before the OATAILPTR
> +        *  register and after the OAHEADPTR register. This is
> +        *  to enable proper functionality of the overflow
> +        *  bit."
> +        */
> +       I915_WRITE(GEN8_OABUFFER, gtt_offset |
> +                  OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
> +       I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
> +
> +       /* Mark that we need updated tail pointers to read from... */
> +       dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
> +       dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
> +
> +       /* Reset state used to recognise context switches, affecting which
> +        * reports we will forward to userspace while filtering for a single
> +        * context.
> +        */
> +       dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
> +
> +       spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +       /* NB: although the OA buffer will initially be allocated
> +        * zeroed via shmfs (and so this memset is redundant when
> +        * first allocating), we may re-init the OA buffer, either
> +        * when re-enabling a stream or in error/reset paths.
> +        *
> +        * The reason we clear the buffer for each re-init is for the
> +        * sanity check in gen8_append_oa_reports() that looks at the
> +        * reason field to make sure it's non-zero which relies on
> +        * the assumption that new reports are being written to zeroed
> +        * memory...
> +        */
> +       memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
> +
> +       /* Maybe make ->pollin per-stream state if we support multiple
> +        * concurrent streams in the future.
> +        */
> +       dev_priv->perf.oa.pollin = false;
> +}
> +
>  static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
>  {
>         struct drm_i915_gem_object *bo;
> @@ -1113,6 +1489,280 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
>                                       ~GT_NOA_ENABLE));
>  }
>
> +/*
> + * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
> + *
> + * MMIO reads or writes to any of the registers listed in the
> + * “Register State Context image” subsections through HOST/IA
> + * MMIO interface for debug purposes must follow the steps below:
> + *
> + * - SW should set the Force Wakeup bit to prevent GT from entering C6.
> + * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
> + * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
> + * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
> + * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
> + * - Read/Write to desired MMIO registers.
> + * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
> + * - Force Wakeup bit should be reset to enable C6 entry.
> + *
> + * XXX: don't nest or overlap calls to this API, it has no ref
> + * counting to track how many entities require the RCS to be
> + * blocked from being idle.
> + */
> +static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
> +{
> +       i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
> +               GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
> +       int ret = 0;
> +
> +       /* It's only in execlist mode that there's no active context while
> +        * idle (and we shouldn't be using this in any other case anyway)
> +        */
> +       if (WARN_ON(!i915.enable_execlists))
> +               return -EINVAL;
> +
> +       intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
> +
> +       I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
> +                  _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
> +
> +       /* Note: we don't currently have a good handle on the maximum
> +        * latency for this wake up so while we only need to hold rcs
> +        * busy from process context we at least keep the waiting
> +        * interruptible...
> +        */
> +       ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
> +       if (ret == -ETIMEDOUT)
> +               DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
> +
> +       if (ret)
> +               intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
> +
> +       return ret;
> +}
> +
> +static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
> +{
> +       if (WARN_ON(!i915.enable_execlists))
> +               return;
> +
> +       I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
> +                  _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
> +       intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
> +}
> +
> +/* NB: It must always remain pointer safe to run this even if the OA unit
> + * has been disabled.
> + *
> + * It's fine to put out-of-date values into these per-context registers
> + * in the case that the OA unit has been disabled.
> + */
> +static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
> +                                          u32 *reg_state)
> +{
> +       struct drm_i915_private *dev_priv = ctx->i915;
> +       const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
> +       int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
> +       u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
> +       u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
> +       /* The MMIO offsets for Flex EU registers aren't contiguous */
> +       u32 flex_mmio[] = {
> +               i915_mmio_reg_offset(EU_PERF_CNTL0),
> +               i915_mmio_reg_offset(EU_PERF_CNTL1),
> +               i915_mmio_reg_offset(EU_PERF_CNTL2),
> +               i915_mmio_reg_offset(EU_PERF_CNTL3),
> +               i915_mmio_reg_offset(EU_PERF_CNTL4),
> +               i915_mmio_reg_offset(EU_PERF_CNTL5),
> +               i915_mmio_reg_offset(EU_PERF_CNTL6),
> +       };
> +       int i;
> +
> +       reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
> +       reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
> +                                     GEN8_OA_TIMER_PERIOD_SHIFT) |
> +                                    (dev_priv->perf.oa.periodic ?
> +                                     GEN8_OA_TIMER_ENABLE : 0) |
> +                                    GEN8_OA_COUNTER_RESUME;
> +
> +       for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
> +               u32 state_offset = ctx_flexeu0 + i * 2;
> +               u32 mmio = flex_mmio[i];
> +
> +               /* This arbitrary default will select the 'EU FPU0 Pipeline
> +                * Active' event. In the future it's anticipated that there
> +                * will be an explicit 'No Event' we can select, but not yet...
> +                */
> +               u32 value = 0;
> +               int j;
> +
> +               for (j = 0; j < n_flex_regs; j++) {
> +                       if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
> +                               value = flex_regs[j].value;
> +                               break;
> +                       }
> +               }
> +
> +               reg_state[state_offset] = mmio;
> +               reg_state[state_offset+1] = value;
> +       }
> +}
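
To make the layout concrete: with the Gen8 offsets set up in
i915_perf_init below (ctx_oactxctrl_offset = 0x120, ctx_flexeu0_offset =
0x2ce), the function above writes address/value pairs into the logical
ring context image roughly as follows (illustrative only):

    reg_state[0x120] = <GEN8_OACTXCONTROL mmio offset>
    reg_state[0x121] = (period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) |
                       (periodic ? GEN8_OA_TIMER_ENABLE : 0) |
                       GEN8_OA_COUNTER_RESUME
    reg_state[0x2ce + 2*i]     = <EU_PERF_CNTL<i> mmio offset>
    reg_state[0x2ce + 2*i + 1] = flex counter select value, or 0 if the
                                 current config doesn't program it

so a context restore reloads both the sampling exponent and the flex EU
counter selection without any command streamer involvement.
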
> +
> +/* Manages updating the per-context aspects of the OA stream
> + * configuration across all contexts.
> + *
> + * The awkward consideration here is that OACTXCONTROL controls the
> + * exponent for periodic sampling which is primarily used for system
> + * wide profiling where we'd like a consistent sampling period even in
> + * the face of context switches.
> + *
> + * Our approach of updating the register state context (as opposed to
> + * say using a workaround batch buffer) ensures that the hardware
> + * won't automatically reload an out-of-date timer exponent even
> + * transiently before a WA BB could be parsed.
> + *
> + * This function needs to:
> + * - Ensure the currently running context's per-context OA state is
> + *   updated
> + * - Ensure that all existing contexts will have the correct per-context
> + *   OA state if they are scheduled for use.
> + * - Ensure any new contexts will be initialized with the correct
> + *   per-context OA state.
> + *
> + * Note: it's only the RCS/Render context that has any OA state.
> + */
> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
> +{
> +       struct i915_gem_context *ctx;
> +       int ret;
> +
> +       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +       if (ret)
> +               return ret;
> +
> +       /* Switch away from any user context.  */
> +       ret = i915_gem_switch_to_kernel_context(dev_priv);
> +       if (ret)
Looks like this error path leaks struct_mutex - shouldn't there be a
mutex_unlock(&dev_priv->drm.struct_mutex); before the return here?

> +               return ret;
> +
> +       /* The OA register config is set up through the context image. This image
> +        * might be written to by the GPU on context switch (in particular on
> +        * lite-restore). This means we can't safely update a context's image,
> +        * if this context is scheduled/submitted to run on the GPU.
> +        *
> +        * We could emit the OA register config through the batch buffer but
> +        * this might leave a small interval of time where the OA unit is
> +        * configured at an invalid sampling period.
> +        *
> +        * So far the best way to work around this issue seems to be draining
> +        * the GPU from any submitted work.
> +        */
> +       ret = i915_gem_wait_for_idle(dev_priv,
> +                                    I915_WAIT_INTERRUPTIBLE |
> +                                    I915_WAIT_LOCKED);
> +       if (ret) {
> +               mutex_unlock(&dev_priv->drm.struct_mutex);
> +               return ret;
> +       }
> +
> +       /* Update all contexts now that we've stalled the submission. */
> +       list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +               if (!ctx->engine[RCS].initialised)
> +                       continue;
> +
> +               gen8_update_reg_state_unlocked(ctx,
> +                                              ctx->engine[RCS].lrc_reg_state);
> +       }
> +
> +       mutex_unlock(&dev_priv->drm.struct_mutex);
> +
> +       /* Now update the current context.
> +        *
> +        * Note: Using MMIO to update per-context registers requires
> +        * some extra care...
> +        */
> +       ret = gen8_begin_ctx_mmio(dev_priv);
> +       if (ret) {
> +               DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
> +               return ret;
> +       }
So do we still need the mmio for the current context, given that we
now idle everything and then update the reg state?

> +
> +       I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
> +                                       GEN8_OA_TIMER_PERIOD_SHIFT) |
> +                                     (dev_priv->perf.oa.periodic ?
> +                                      GEN8_OA_TIMER_ENABLE : 0) |
> +                                     GEN8_OA_COUNTER_RESUME));
> +
> +       config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
> +                       dev_priv->perf.oa.flex_regs_len);
> +
> +       gen8_end_ctx_mmio(dev_priv);
> +
> +       return 0;
> +}
> +
> +static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
> +{
> +       int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
> +
> +       if (ret)
> +               return ret;
> +
> +       /* We disable slice/unslice clock ratio change reports on SKL since
> +        * they are too noisy. The HW generates a lot of redundant reports
> +        * where the ratio hasn't really changed, causing a lot of redundant
> +        * work for userspace to process and increasing the chances we'll
> +        * hit buffer overruns.
> +        *
> +        * Although we don't currently use the 'disable overrun' OABUFFER
> +        * feature it's worth noting that clock ratio reports have to be
> +        * disabled before considering using that feature since the HW doesn't
> +        * correctly block these reports.
> +        *
> +        * Currently none of the high-level metrics we have depend on knowing
> +        * this ratio to normalize.
> +        *
> +        * Note: This register is not power context saved and restored, but
> +        * that's OK considering that we disable RC6 while the OA unit is
> +        * enabled.
> +        *
> +        * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
> +        * be read back from automatically triggered reports, as part of the
> +        * RPT_ID field.
> +        */
> +       if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
> +               I915_WRITE(GEN8_OA_DEBUG,
> +                          _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
> +                                             GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
> +       }
> +
> +       I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
> +       config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
> +                      dev_priv->perf.oa.mux_regs_len);
> +       I915_WRITE(GDT_CHICKEN_BITS, 0x80);
> +
> +       /* It takes a fairly long time for a new MUX configuration to
> +        * be applied after these register writes. This delay
> +        * duration is taken from Haswell (derived empirically based on
> +        * the render_basic config) but hopefully it covers the
> +        * maximum configuration latency for Gen8+ too...
> +        */
> +       usleep_range(15000, 20000);
> +
> +       config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
> +                      dev_priv->perf.oa.b_counter_regs_len);
> +
> +       ret = gen8_configure_all_contexts(dev_priv);
> +       if (ret)
> +               return ret;
> +
> +       return 0;
> +}
> +
> +static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
> +{
> +       /* NOP */
> +}
> +
>  static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
>  {
>         lockdep_assert_held(&dev_priv->perf.hook_lock);
> @@ -1157,6 +1807,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
>         spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
>  }
>
> +static void gen8_oa_enable(struct drm_i915_private *dev_priv)
> +{
> +       u32 report_format = dev_priv->perf.oa.oa_buffer.format;
> +
> +       /* Reset buf pointers so we don't forward reports from before now.
> +        *
> +        * Think carefully if considering trying to avoid this, since it
> +        * also ensures status flags and the buffer itself are cleared
> +        * in error paths, and we have checks for invalid reports based
> +        * on the assumption that certain fields are written to zeroed
> +        * memory, which this helps maintain.
> +        */
> +       gen8_init_oa_buffer(dev_priv);
> +
> +       /* Note: we don't rely on the hardware to perform single context
> +        * filtering and instead filter on the cpu based on the context-id
> +        * field of reports
> +        */
> +       I915_WRITE(GEN8_OACONTROL, (report_format <<
> +                                   GEN8_OA_REPORT_FORMAT_SHIFT) |
> +                                  GEN8_OA_COUNTER_ENABLE);
> +}
> +
>  /**
>   * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
>   * @stream: An i915 perf stream opened for OA metrics
> @@ -1183,6 +1856,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
>         I915_WRITE(GEN7_OACONTROL, 0);
>  }
>
> +static void gen8_oa_disable(struct drm_i915_private *dev_priv)
> +{
> +       I915_WRITE(GEN8_OACONTROL, 0);
> +}
> +
>  /**
>   * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
>   * @stream: An i915 perf stream opened for OA metrics
> @@ -1361,6 +2039,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
>         return ret;
>  }
>
> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
> +                           struct i915_gem_context *ctx,
> +                           u32 *reg_state)
> +{
> +       struct drm_i915_private *dev_priv = engine->i915;
> +
> +       if (engine->id != RCS)
> +               return;
> +
> +       if (!dev_priv->perf.initialized)
> +               return;
> +
> +       gen8_update_reg_state_unlocked(ctx, reg_state);
> +}
> +
>  /**
>   * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
>   * @stream: An i915 perf stream
> @@ -1486,7 +2179,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
>                 container_of(hrtimer, typeof(*dev_priv),
>                              perf.oa.poll_check_timer);
>
> -       if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
> +       if (oa_buffer_check_unlocked(dev_priv)) {
>                 dev_priv->perf.oa.pollin = true;
>                 wake_up(&dev_priv->perf.oa.poll_wq);
>         }
> @@ -1775,6 +2468,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
>         struct i915_gem_context *specific_ctx = NULL;
>         struct i915_perf_stream *stream = NULL;
>         unsigned long f_flags = 0;
> +       bool privileged_op = true;
>         int stream_fd;
>         int ret;
>
> @@ -1792,12 +2486,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
>                 }
>         }
>
> +       /* On Haswell the OA unit supports clock gating off for a specific
> +        * context and in this mode there's no visibility of metrics for the
> +        * rest of the system, which we consider acceptable for a
> +        * non-privileged client.
> +        *
> +        * For Gen8+ the OA unit no longer supports clock gating off for a
> +        * specific context and the kernel can't securely stop the counters
> +        * from updating as system-wide / global values. Even though we can
> +        * filter reports based on the included context ID we can't block
> +        * clients from seeing the raw / global counter values via
> +        * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
> +        * enable the OA unit by default.
> +        */
> +       if (IS_HASWELL(dev_priv) && specific_ctx)
> +               privileged_op = false;
> +
>         /* Similar to perf's kernel.perf_paranoid_cpu sysctl option
>          * we check a dev.i915.perf_stream_paranoid sysctl option
>          * to determine if it's ok to access system wide OA counters
>          * without CAP_SYS_ADMIN privileges.
>          */
> -       if (!specific_ctx &&
> +       if (privileged_op &&
>             i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
>                 DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
>                 ret = -EACCES;
> @@ -2069,9 +2779,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>   */
>  void i915_perf_register(struct drm_i915_private *dev_priv)
>  {
> -       if (!IS_HASWELL(dev_priv))
> -               return;
> -
>         if (!dev_priv->perf.initialized)
>                 return;
>
> @@ -2087,11 +2794,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
>         if (!dev_priv->perf.metrics_kobj)
>                 goto exit;
>
> -       if (i915_perf_register_sysfs_hsw(dev_priv)) {
> -               kobject_put(dev_priv->perf.metrics_kobj);
> -               dev_priv->perf.metrics_kobj = NULL;
> +       if (IS_HASWELL(dev_priv)) {
> +               if (i915_perf_register_sysfs_hsw(dev_priv))
> +                       goto sysfs_error;
> +       } else if (IS_BROADWELL(dev_priv)) {
> +               if (i915_perf_register_sysfs_bdw(dev_priv))
> +                       goto sysfs_error;
> +       } else if (IS_CHERRYVIEW(dev_priv)) {
> +               if (i915_perf_register_sysfs_chv(dev_priv))
> +                       goto sysfs_error;
> +       } else if (IS_SKYLAKE(dev_priv)) {
> +               if (IS_SKL_GT2(dev_priv)) {
> +                       if (i915_perf_register_sysfs_sklgt2(dev_priv))
> +                               goto sysfs_error;
> +               } else if (IS_SKL_GT3(dev_priv)) {
> +                       if (i915_perf_register_sysfs_sklgt3(dev_priv))
> +                               goto sysfs_error;
> +               } else if (IS_SKL_GT4(dev_priv)) {
> +                       if (i915_perf_register_sysfs_sklgt4(dev_priv))
> +                               goto sysfs_error;
> +               } else
> +                       goto sysfs_error;
> +       } else if (IS_BROXTON(dev_priv)) {
> +               if (i915_perf_register_sysfs_bxt(dev_priv))
> +                       goto sysfs_error;
>         }
>
> +       goto exit;
> +
> +sysfs_error:
> +       kobject_put(dev_priv->perf.metrics_kobj);
> +       dev_priv->perf.metrics_kobj = NULL;
> +
>  exit:
>         mutex_unlock(&dev_priv->perf.lock);
>  }
> @@ -2107,13 +2841,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
>   */
>  void i915_perf_unregister(struct drm_i915_private *dev_priv)
>  {
> -       if (!IS_HASWELL(dev_priv))
> -               return;
> -
>         if (!dev_priv->perf.metrics_kobj)
>                 return;
>
> -       i915_perf_unregister_sysfs_hsw(dev_priv);
> +        if (IS_HASWELL(dev_priv))
> +                i915_perf_unregister_sysfs_hsw(dev_priv);
> +        else if (IS_BROADWELL(dev_priv))
> +                i915_perf_unregister_sysfs_bdw(dev_priv);
> +        else if (IS_CHERRYVIEW(dev_priv))
> +                i915_perf_unregister_sysfs_chv(dev_priv);
> +        else if (IS_SKYLAKE(dev_priv)) {
> +               if (IS_SKL_GT2(dev_priv))
> +                       i915_perf_unregister_sysfs_sklgt2(dev_priv);
> +               else if (IS_SKL_GT3(dev_priv))
> +                       i915_perf_unregister_sysfs_sklgt3(dev_priv);
> +               else if (IS_SKL_GT4(dev_priv))
> +                       i915_perf_unregister_sysfs_sklgt4(dev_priv);
> +       } else if (IS_BROXTON(dev_priv))
> +                i915_perf_unregister_sysfs_bxt(dev_priv);
>
>         kobject_put(dev_priv->perf.metrics_kobj);
>         dev_priv->perf.metrics_kobj = NULL;
> @@ -2172,36 +2917,105 @@ static struct ctl_table dev_root[] = {
>   */
>  void i915_perf_init(struct drm_i915_private *dev_priv)
>  {
> -       if (!IS_HASWELL(dev_priv))
> -               return;
> -
> -       hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
> -                    CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> -       dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
> -       init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
> +       dev_priv->perf.oa.n_builtin_sets = 0;
> +
> +       if (IS_HASWELL(dev_priv)) {
> +               dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
> +               dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
> +               dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
> +               dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
> +               dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
> +               dev_priv->perf.oa.ops.read = gen7_oa_read;
> +               dev_priv->perf.oa.ops.oa_hw_tail_read =
> +                       gen7_oa_hw_tail_read;
> +
> +               dev_priv->perf.oa.oa_formats = hsw_oa_formats;
> +
> +               dev_priv->perf.oa.n_builtin_sets =
> +                       i915_oa_n_builtin_metric_sets_hsw;
> +       } else if (i915.enable_execlists) {
> +               /* Note that although we could theoretically also support the
> +                * legacy ringbuffer mode on BDW (and earlier iterations of
> +                * this driver, before upstreaming, did this) it didn't seem
> +                * worth the complexity to maintain now that BDW+ enables
> +                * execlist mode by default.
> +                */
>
> -       INIT_LIST_HEAD(&dev_priv->perf.streams);
> -       mutex_init(&dev_priv->perf.lock);
> -       spin_lock_init(&dev_priv->perf.hook_lock);
> -       spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
> +               if (IS_GEN8(dev_priv)) {
> +                       dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
> +                       dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
> +                       dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
> +
> +                       if (IS_BROADWELL(dev_priv)) {
> +                               dev_priv->perf.oa.n_builtin_sets =
> +                                       i915_oa_n_builtin_metric_sets_bdw;
> +                               dev_priv->perf.oa.ops.select_metric_set =
> +                                       i915_oa_select_metric_set_bdw;
> +                       } else if (IS_CHERRYVIEW(dev_priv)) {
> +                               dev_priv->perf.oa.n_builtin_sets =
> +                                       i915_oa_n_builtin_metric_sets_chv;
> +                               dev_priv->perf.oa.ops.select_metric_set =
> +                                       i915_oa_select_metric_set_chv;
> +                       }
> +               } else if (IS_GEN9(dev_priv)) {
> +                       dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
> +                       dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
> +                       dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
> +
> +                       if (IS_SKL_GT2(dev_priv)) {
> +                               dev_priv->perf.oa.n_builtin_sets =
> +                                       i915_oa_n_builtin_metric_sets_sklgt2;
> +                               dev_priv->perf.oa.ops.select_metric_set =
> +                                       i915_oa_select_metric_set_sklgt2;
> +                       } else if (IS_SKL_GT3(dev_priv)) {
> +                               dev_priv->perf.oa.n_builtin_sets =
> +                                       i915_oa_n_builtin_metric_sets_sklgt3;
> +                               dev_priv->perf.oa.ops.select_metric_set =
> +                                       i915_oa_select_metric_set_sklgt3;
> +                       } else if (IS_SKL_GT4(dev_priv)) {
> +                               dev_priv->perf.oa.n_builtin_sets =
> +                                       i915_oa_n_builtin_metric_sets_sklgt4;
> +                               dev_priv->perf.oa.ops.select_metric_set =
> +                                       i915_oa_select_metric_set_sklgt4;
> +                       } else if (IS_BROXTON(dev_priv)) {
> +                               dev_priv->perf.oa.n_builtin_sets =
> +                                       i915_oa_n_builtin_metric_sets_bxt;
> +                               dev_priv->perf.oa.ops.select_metric_set =
> +                                       i915_oa_select_metric_set_bxt;
> +                       }
> +               }
>
> -       dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
> -       dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
> -       dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
> -       dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
> -       dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
> -       dev_priv->perf.oa.ops.read = gen7_oa_read;
> -       dev_priv->perf.oa.ops.oa_buffer_check =
> -               gen7_oa_buffer_check_unlocked;
> +               if (dev_priv->perf.oa.n_builtin_sets) {
> +                       dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
> +                       dev_priv->perf.oa.ops.enable_metric_set =
> +                               gen8_enable_metric_set;
> +                       dev_priv->perf.oa.ops.disable_metric_set =
> +                               gen8_disable_metric_set;
> +                       dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
> +                       dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
> +                       dev_priv->perf.oa.ops.read = gen8_oa_read;
> +                       dev_priv->perf.oa.ops.oa_hw_tail_read =
> +                               gen8_oa_hw_tail_read;
> +
> +                       dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
> +               }
> +       }
>
> -       dev_priv->perf.oa.oa_formats = hsw_oa_formats;
> +       if (dev_priv->perf.oa.n_builtin_sets) {
> +               hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
> +                               CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> +               dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
> +               init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
>
> -       dev_priv->perf.oa.n_builtin_sets =
> -               i915_oa_n_builtin_metric_sets_hsw;
> +               INIT_LIST_HEAD(&dev_priv->perf.streams);
> +               mutex_init(&dev_priv->perf.lock);
> +               spin_lock_init(&dev_priv->perf.hook_lock);
> +               spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
>
> -       dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
> +               dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
>
> -       dev_priv->perf.initialized = true;
> +               dev_priv->perf.initialized = true;
> +       }
>  }
>
>  /**
> @@ -2216,5 +3030,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
>         unregister_sysctl_table(dev_priv->perf.sysctl_header);
>
>         memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
> +
>         dev_priv->perf.initialized = false;
>  }
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 4c72adae368b..c0bbba6d5b78 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>
>  #define GEN8_OACTXID _MMIO(0x2364)
>
> +#define GEN8_OA_DEBUG _MMIO(0x2B04)
> +#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
> +#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO           (1<<6)
> +#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS      (1<<2)
> +#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
> +
>  #define GEN8_OACONTROL _MMIO(0x2B00)
>  #define  GEN8_OA_REPORT_FORMAT_A12         (0<<2)
>  #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
> @@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>  #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
>  #define  GEN7_OABUFFER_RESUME              (1<<0)
>
> +#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
>  #define GEN8_OABUFFER _MMIO(0x2b14)
>
>  #define GEN7_OASTATUS1 _MMIO(0x2364)
> @@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>  #define  GEN8_OASTATUS_REPORT_LOST         (1<<0)
>
>  #define GEN8_OAHEADPTR _MMIO(0x2B0C)
> +#define GEN8_OAHEADPTR_MASK    0xffffffc0
>  #define GEN8_OATAILPTR _MMIO(0x2B10)
> +#define GEN8_OATAILPTR_MASK    0xffffffc0
>
>  #define OABUFFER_SIZE_128K  (0<<3)
>  #define OABUFFER_SIZE_256K  (1<<3)
> @@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>
>  #define OA_MEM_SELECT_GGTT  (1<<0)
>
> +/*
> + * Flexible, Aggregate EU Counter Registers.
> + * Note: these aren't contiguous
> + */
>  #define EU_PERF_CNTL0      _MMIO(0xe458)
> +#define EU_PERF_CNTL1      _MMIO(0xe558)
> +#define EU_PERF_CNTL2      _MMIO(0xe658)
> +#define EU_PERF_CNTL3      _MMIO(0xe758)
> +#define EU_PERF_CNTL4      _MMIO(0xe45c)
> +#define EU_PERF_CNTL5      _MMIO(0xe55c)
> +#define EU_PERF_CNTL6      _MMIO(0xe65c)
>
>  #define GDT_CHICKEN_BITS    _MMIO(0x9840)
>  #define GT_NOA_ENABLE      0x00000080
> @@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
>  #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE        (1 << 12)
>  #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE       (1<<10)
>
> +#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
> +#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
> +
>  /* Fuse readout registers for GT */
>  #define CHV_FUSE_GT                    _MMIO(VLV_DISPLAY_BASE + 0x2168)
>  #define   CHV_FGT_DISABLE_SS0          (1 << 10)
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 7df278fe492e..351e231a02f4 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1879,6 +1879,8 @@ static void execlists_init_reg_state(u32 *regs,
>                 regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
>                 CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
>                         make_rpcs(dev_priv));
> +
> +               i915_oa_init_reg_state(engine, ctx, regs);
>         }
>  }
>
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index 464547d08173..15bc9f78ba4d 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
>  };
>
>  enum drm_i915_oa_format {
> -       I915_OA_FORMAT_A13 = 1,
> -       I915_OA_FORMAT_A29,
> -       I915_OA_FORMAT_A13_B8_C8,
> -       I915_OA_FORMAT_B4_C8,
> -       I915_OA_FORMAT_A45_B8_C8,
> -       I915_OA_FORMAT_B4_C8_A16,
> -       I915_OA_FORMAT_C4_B8,
> +       I915_OA_FORMAT_A13 = 1,     /* HSW only */
> +       I915_OA_FORMAT_A29,         /* HSW only */
> +       I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
> +       I915_OA_FORMAT_B4_C8,       /* HSW only */
> +       I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
> +       I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
> +       I915_OA_FORMAT_C4_B8,       /* HSW+ */
> +
> +       /* Gen8+ */
> +       I915_OA_FORMAT_A12,
> +       I915_OA_FORMAT_A12_B8_C8,
> +       I915_OA_FORMAT_A32u40_A4u32_B8_C8,
>
>         I915_OA_FORMAT_MAX          /* non-ABI */
>  };
> --
> 2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v6 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-25 16:42         ` Matthew Auld
@ 2017-04-25 17:10           ` Lionel Landwerlin
  2017-04-25 17:47             ` Matthew Auld
  2017-04-25 17:30           ` [PATCH v7 " Lionel Landwerlin
  1 sibling, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-25 17:10 UTC (permalink / raw)
  To: Matthew Auld; +Cc: Intel Graphics Development

On 25/04/17 09:42, Matthew Auld wrote:
> On 24 April 2017 at 19:49, Lionel Landwerlin
> <lionel.g.landwerlin@intel.com> wrote:
>> From: Robert Bragg <robert@sixbynine.org>
>>
>> Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
>> share (more-or-less) the same OA unit design.
>>
>> Of particular note in comparison to Haswell: some OA unit HW config
>> state has become per-context state and as a consequence it is somewhat
>> more complicated to manage synchronous state changes from the cpu while
>> there's no guarantee of what context (if any) is currently actively
>> running on the gpu.
>>
>> The periodic sampling frequency which can be particularly useful for
>> system-wide analysis (as opposed to command stream synchronised
>> MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
>> have become per-context save and restored (while the OABUFFER
>> destination is still a shared, system-wide resource).
>>
>> This support for gen8+ takes care to consider a number of timing
>> challenges involved in synchronously updating per-context state
>> primarily by programming all config state from the cpu and updating all
>> current and saved contexts synchronously while the OA unit is still
>> disabled.
>>
>> The driver intentionally avoids depending on command streamer
>> programming to update OA state considering the lack of synchronization
>> between the automatic loading of OACTXCONTROL state (that includes the
>> periodic sampling state and enable state) on context restore and the
>> parsing of any general purpose BB the driver can control. I.e. this
>> implementation is careful to avoid the possibility of a context restore
>> temporarily enabling any out-of-date periodic sampling state. In
>> addition to the risk of transiently-out-of-date state being loaded
>> automatically, there are also internal HW latencies involved in the
>> loading of MUX configurations which would be difficult to account for
>> from the command streamer (and we only want to enable the unit once
>> the MUX configuration is complete).
>>
>> Since the Gen8+ OA unit design no longer supports clock gating the unit
>> off for a single given context (which effectively stopped any progress
>> of counters while any other context was running) and instead supports
>> tagging OA reports with a context ID for filtering on the CPU, it means
>> we can no longer hide the system-wide progress of counters from a
>> non-privileged application only interested in metrics for its own
>> context. Although we could theoretically try and subtract the progress
>> of other contexts before forwarding reports via read() we aren't in a
>> position to filter reports captured via MI_REPORT_PERF_COUNT commands.
>> As a result, for Gen8+, we always require the
>> dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
>> if not root.
>>
>> v5: Drain submitted requests when enabling metric set to ensure no
>>      lite-restore erases the context image we just updated (Lionel)
>>
>> v6: In addition to drain, switch to kernel context & update all
>>      context in place (Chris)
>>
>> Signed-off-by: Robert Bragg <robert@sixbynine.org>
>> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
>> Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
>> ---
>>   drivers/gpu/drm/i915/i915_drv.h  |  45 +-
>>   drivers/gpu/drm/i915/i915_perf.c | 957 ++++++++++++++++++++++++++++++++++++---
>>   drivers/gpu/drm/i915/i915_reg.h  |  22 +
>>   drivers/gpu/drm/i915/intel_lrc.c |   2 +
>>   include/uapi/drm/i915_drm.h      |  19 +-
>>   5 files changed, 952 insertions(+), 93 deletions(-)
>>
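As an aside on the privilege model described in the commit message above: a
minimal userspace sketch of opening a Gen8+ OA stream with one of the new
report formats might look like the following. The uapi names are taken from
include/uapi/drm/i915_drm.h as extended by this series; the metrics set ID is
assumed to have been read from a config's "id" file under the new sysfs
"metrics" directory, and the sampling exponent is an arbitrary choice. For a
non-root user this open is expected to fail with EACCES on Gen8+ unless
dev.i915.perf_stream_paranoid is set to 0.

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Sketch only: drm_fd is an open i915 DRM fd; metrics_set_id is assumed to
 * come from sysfs (e.g. /sys/class/drm/card0/metrics/<config>/id).
 */
static int open_oa_stream(int drm_fd, uint64_t metrics_set_id)
{
	uint64_t properties[] = {
		/* Include raw OA reports in read() samples */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,

		/* Which counter configuration to use */
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,

		/* One of the Gen8+ formats added by this series */
		DRM_I915_PERF_PROP_OA_FORMAT,
		I915_OA_FORMAT_A32u40_A4u32_B8_C8,

		/* Periodic sampling exponent (arbitrary for this sketch) */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC |
			 I915_PERF_FLAG_FD_NONBLOCK,
		.num_properties = sizeof(properties) /
				  (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}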
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index ffa1fc5eddfd..676b1227067c 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -2064,9 +2064,17 @@ struct i915_oa_ops {
>>          void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
>>
>>          /**
>> -        * @enable_metric_set: Applies any MUX configuration to set up the
>> -        * Boolean and Custom (B/C) counters that are part of the counter
>> -        * reports being sampled. May apply system constraints such as
>> +        * @select_metric_set: The auto generated code that checks whether a
>> +        * requested OA config is applicable to the system and if so sets up
>> +        * the mux, oa and flex eu register config pointers according to the
>> +        * current dev_priv->perf.oa.metrics_set.
>> +        */
>> +       int (*select_metric_set)(struct drm_i915_private *dev_priv);
>> +
>> +       /**
>> +        * @enable_metric_set: Selects and applies any MUX configuration to set
>> +        * up the Boolean and Custom (B/C) counters that are part of the
>> +        * counter reports being sampled. May apply system constraints such as
>>           * disabling EU clock gating as required.
>>           */
>>          int (*enable_metric_set)(struct drm_i915_private *dev_priv);
>> @@ -2097,20 +2105,13 @@ struct i915_oa_ops {
>>                      size_t *offset);
>>
>>          /**
>> -        * @oa_buffer_check: Check for OA buffer data + update tail
>> -        *
>> -        * This is either called via fops or the poll check hrtimer (atomic
>> -        * ctx) without any locks taken.
>> +        * @oa_hw_tail_read: read the OA tail pointer register
>>           *
>> -        * It's safe to read OA config state here unlocked, assuming that this
>> -        * is only called while the stream is enabled, while the global OA
>> -        * configuration can't be modified.
>> -        *
>> -        * Efficiency is more important than avoiding some false positives
>> -        * here, which will be handled gracefully - likely resulting in an
>> -        * %EAGAIN error for userspace.
>> +        * In particular this enables us to share all the fiddly code for
>> +        * handling the OA unit tail pointer race that affects multiple
>> +        * generations.
>>           */
>> -       bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
>> +       u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
>>   };
>>
>>   struct intel_cdclk_state {
>> @@ -2472,6 +2473,7 @@ struct drm_i915_private {
>>                          struct {
>>                                  struct i915_vma *vma;
>>                                  u8 *vaddr;
>> +                               u32 last_ctx_id;
>>                                  int format;
>>                                  int format_size;
>>
>> @@ -2541,6 +2543,14 @@ struct drm_i915_private {
>>                          } oa_buffer;
>>
>>                          u32 gen7_latched_oastatus1;
>> +                       u32 ctx_oactxctrl_offset;
>> +                       u32 ctx_flexeu0_offset;
>> +
>> +                       /* The RPT_ID/reason field for Gen8+ includes a bit
>> +                        * to determine if the CTX ID in the report is valid
>> +                        * but the specific bit differs between Gen 8 and 9
>> +                        */
>> +                       u32 gen8_valid_ctx_bit;
>>
>>                          struct i915_oa_ops ops;
>>                          const struct i915_oa_format *oa_formats;
>> @@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
>>   #define IS_KBL_ULX(dev_priv)   (INTEL_DEVID(dev_priv) == 0x590E || \
>>                                   INTEL_DEVID(dev_priv) == 0x5915 || \
>>                                   INTEL_DEVID(dev_priv) == 0x591E)
>> +#define IS_SKL_GT2(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>> +                                (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
>>   #define IS_SKL_GT3(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>>                                   (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
>>   #define IS_SKL_GT4(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>> @@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
>>
>>   int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>>                           struct drm_file *file);
>> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
>> +                           struct i915_gem_context *ctx,
>> +                           uint32_t *reg_state);
>>
>>   /* i915_gem_evict.c */
>>   int __must_check i915_gem_evict_something(struct i915_address_space *vm,
>> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
>> index 3277a52ce98e..f2688dee5a03 100644
>> --- a/drivers/gpu/drm/i915/i915_perf.c
>> +++ b/drivers/gpu/drm/i915/i915_perf.c
>> @@ -196,6 +196,12 @@
>>
>>   #include "i915_drv.h"
>>   #include "i915_oa_hsw.h"
>> +#include "i915_oa_bdw.h"
>> +#include "i915_oa_chv.h"
>> +#include "i915_oa_sklgt2.h"
>> +#include "i915_oa_sklgt3.h"
>> +#include "i915_oa_sklgt4.h"
>> +#include "i915_oa_bxt.h"
>>
>>   /* HW requires this to be a power of two, between 128k and 16M, though driver
>>    * is currently generally designed assuming the largest 16M size is used such
>> @@ -215,7 +221,7 @@
>>    *
>>    * Although this can be observed explicitly while copying reports to userspace
>>    * by checking for a zeroed report-id field in tail reports, we want to account
>> - * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
>> + * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
>>    * read() attempts.
>>    *
>>    * In effect we define a tail pointer for reading that lags the real tail
>> @@ -237,7 +243,7 @@
>>    * indicates that an updated tail pointer is needed.
>>    *
>>    * Most of the implementation details for this workaround are in
>> - * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
>> + * oa_buffer_check_unlocked() and _append_oa_reports()
>>    *
>>    * Note for posterity: previously the driver used to define an effective tail
>>    * pointer that lagged the real pointer by a 'tail margin' measured in bytes
>> @@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
>>
>>   #define INVALID_CTX_ID 0xffffffff
>>
>> +/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
>> +#define OAREPORT_REASON_MASK           0x3f
>> +#define OAREPORT_REASON_SHIFT          19
>> +#define OAREPORT_REASON_TIMER          (1<<0)
>> +#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
>> +#define OAREPORT_REASON_CLK_RATIO      (1<<5)
>> +
>>
>>   /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
>>    *
>> @@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
>>          [I915_OA_FORMAT_C4_B8]      = { 7, 64 },
>>   };
>>
>> +static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
>> +       [I915_OA_FORMAT_A12]                = { 0, 64 },
>> +       [I915_OA_FORMAT_A12_B8_C8]          = { 2, 128 },
>> +       [I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
>> +       [I915_OA_FORMAT_C4_B8]              = { 7, 64 },
>> +};
>> +
>>   #define SAMPLE_OA_REPORT      (1<<0)
>>
>>   /**
>> @@ -332,8 +352,20 @@ struct perf_open_properties {
>>          int oa_period_exponent;
>>   };
>>
>> +static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
>> +{
>> +       return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
>> +}
>> +
>> +static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
>> +{
>> +       u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
>> +
>> +       return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
>> +}
>> +
>>   /**
>> - * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
>> + * oa_buffer_check_unlocked - check for data and update tail ptr state
>>    * @dev_priv: i915 device instance
>>    *
>>    * This is either called via fops (for blocking reads in user ctx) or the poll
>> @@ -356,12 +388,11 @@ struct perf_open_properties {
>>    *
>>    * Returns: %true if the OA buffer contains data, else %false
>>    */
>> -static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>> +static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>>   {
>>          int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>>          unsigned long flags;
>>          unsigned int aged_idx;
>> -       u32 oastatus1;
>>          u32 head, hw_tail, aged_tail, aging_tail;
>>          u64 now;
>>
>> @@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>>          aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
>>          aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
>>
>> -       oastatus1 = I915_READ(GEN7_OASTATUS1);
>> -       hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
>> +       hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
>>
>>          /* The tail pointer increases in 64 byte increments,
>>           * not in report_size steps...
>> @@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>>          if (aging_tail != INVALID_TAIL_PTR &&
>>              ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
>>               OA_TAIL_MARGIN_NSEC)) {
>> +
>>                  aged_idx ^= 1;
>>                  dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
>>
>> @@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
>>    *
>>    * Returns: 0 on success, negative error code on failure.
>>    */
>> +static int gen8_append_oa_reports(struct i915_perf_stream *stream,
>> +                                 char __user *buf,
>> +                                 size_t count,
>> +                                 size_t *offset)
>> +{
>> +       struct drm_i915_private *dev_priv = stream->dev_priv;
>> +       int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>> +       u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
>> +       u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
>> +       u32 mask = (OA_BUFFER_SIZE - 1);
>> +       size_t start_offset = *offset;
>> +       unsigned long flags;
>> +       unsigned int aged_tail_idx;
>> +       u32 head, tail;
>> +       u32 taken;
>> +       int ret = 0;
>> +
>> +       if (WARN_ON(!stream->enabled))
>> +               return -EIO;
>> +
>> +       spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>> +
>> +       head = dev_priv->perf.oa.oa_buffer.head;
>> +       aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
>> +       tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
>> +
>> +       spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>> +
>> +       /* An invalid tail pointer here means we're still waiting for the poll
>> +        * hrtimer callback to give us a pointer
>> +        */
>> +       if (tail == INVALID_TAIL_PTR)
>> +               return -EAGAIN;
>> +
>> +       /* NB: oa_buffer.head/tail include the gtt_offset which we don't want
>> +        * while indexing relative to oa_buf_base.
>> +        */
>> +       head -= gtt_offset;
>> +       tail -= gtt_offset;
>> +
>> +       /* An out of bounds or misaligned head or tail pointer implies a driver
>> +        * bug since we validate + align the tail pointers we read from the
>> +        * hardware and we are in full control of the head pointer which should
>> +        * only be incremented by multiples of the report size (notably also
>> +        * all a power of two).
>> +        */
>> +       if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
>> +                     tail > OA_BUFFER_SIZE || tail % report_size,
>> +                     "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
>> +                     head, tail))
>> +               return -EIO;
>> +
>> +
>> +       for (/* none */;
>> +            (taken = OA_TAKEN(tail, head));
>> +            head = (head + report_size) & mask) {
>> +               u8 *report = oa_buf_base + head;
>> +               u32 *report32 = (void *)report;
>> +               u32 ctx_id;
>> +               u32 reason;
>> +
>> +               /* All the report sizes factor neatly into the buffer
>> +                * size so we never expect to see a report split
>> +                * between the beginning and end of the buffer.
>> +                *
>> +                * Given the initial alignment check a misalignment
>> +                * here would imply a driver bug that would result
>> +                * in an overrun.
>> +                */
>> +               if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
>> +                       DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
>> +                       break;
>> +               }
>> +
>> +               /* The reason field includes flags identifying what
>> +                * triggered this specific report (mostly timer
>> +                * triggered or e.g. due to a context switch).
>> +                *
>> +                * This field is never expected to be zero so we can
>> +                * check that the report isn't invalid before copying
>> +                * it to userspace...
>> +                */
>> +               reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
>> +                         OAREPORT_REASON_MASK);
>> +               if (reason == 0) {
>> +                       if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
>> +                               DRM_NOTE("Skipping spurious, invalid OA report\n");
>> +                       continue;
>> +               }
>> +
>> +               /* XXX: Just keep the lower 21 bits for now since I'm not
>> +                * entirely sure if the HW touches any of the higher bits in
>> +                * this field
>> +                */
>> +               ctx_id = report32[2] & 0x1fffff;
>> +
>> +               /* Squash whatever is in the CTX_ID field if it's marked as
>> +                * invalid to be sure we avoid false-positive, single-context
>> +                * filtering below...
>> +                *
>> +                * Note: that we don't clear the valid_ctx_bit so userspace can
>> +                * understand that the ID has been squashed by the kernel.
>> +                */
>> +               if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
>> +                       ctx_id = report32[2] = INVALID_CTX_ID;
>> +
>> +               /* NB: For Gen 8 the OA unit no longer supports clock gating
>> +                * off for a specific context and the kernel can't securely
>> +                * stop the counters from updating as system-wide / global
>> +                * values.
>> +                *
>> +                * Automatic reports now include a context ID so reports can be
>> +                * filtered on the cpu but it's not worth trying to
>> +                * automatically subtract/hide counter progress for other
>> +                * contexts while filtering since we can't stop userspace
>> +                * issuing MI_REPORT_PERF_COUNT commands which would still
>> +                * provide a side-band view of the real values.
>> +                *
>> +                * To allow userspace (such as Mesa/GL_INTEL_performance_query)
>> +                * to normalize counters for a single filtered context, it
>> +                * needs to be forwarded bookend context-switch reports so that it
>> +                * can track switches in between MI_REPORT_PERF_COUNT commands
>> +                * and can itself subtract/ignore the progress of counters
>> +                * associated with other contexts. Note that the hardware
>> +                * automatically triggers reports when switching to a new
>> +                * context which are tagged with the ID of the newly active
>> +                * context. To avoid the complexity (and likely fragility) of
>> +                * reading ahead while parsing reports to try and minimize
>> +                * forwarding redundant context switch reports (i.e. between
>> +                * other, unrelated contexts) we simply elect to forward them
>> +                * all.
>> +                *
>> +                * We don't rely solely on the reason field to identify context
>> +                * switches since it's not-uncommon for periodic samples to
>> +                * identify a switch before any 'context switch' report.
>> +                */
>> +               if (!dev_priv->perf.oa.exclusive_stream->ctx ||
>> +                   dev_priv->perf.oa.specific_ctx_id == ctx_id ||
>> +                   (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
>> +                    dev_priv->perf.oa.specific_ctx_id) ||
>> +                   reason & OAREPORT_REASON_CTX_SWITCH) {
>> +
>> +                       /* While filtering for a single context we avoid
>> +                        * leaking the IDs of other contexts.
>> +                        */
>> +                       if (dev_priv->perf.oa.exclusive_stream->ctx &&
>> +                           dev_priv->perf.oa.specific_ctx_id != ctx_id) {
>> +                               report32[2] = INVALID_CTX_ID;
>> +                       }
>> +
>> +                       ret = append_oa_sample(stream, buf, count, offset,
>> +                                              report);
>> +                       if (ret)
>> +                               break;
>> +
>> +                       dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
>> +               }
>> +
>> +               /* The above reason field sanity check is based on
>> +                * the assumption that the OA buffer is initially
>> +                * zeroed and we reset the field after copying so the
>> +                * check is still meaningful once old reports start
>> +                * being overwritten.
>> +                */
>> +               report32[0] = 0;
>> +       }
>> +
>> +       if (start_offset != *offset) {
>> +               spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>> +
>> +               /* We removed the gtt_offset for the copy loop above, indexing
>> +                * relative to oa_buf_base so put back here...
>> +                */
>> +               head += gtt_offset;
>> +
>> +               I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
>> +               dev_priv->perf.oa.oa_buffer.head = head;
>> +
>> +               spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>> +/**
>> + * gen8_oa_read - copy status records then buffered OA reports
>> + * @stream: An i915-perf stream opened for OA metrics
>> + * @buf: destination buffer given by userspace
>> + * @count: the number of bytes userspace wants to read
>> + * @offset: (inout): the current position for writing into @buf
>> + *
>> + * Checks OA unit status registers and if necessary appends corresponding
>> + * status records for userspace (such as for a buffer full condition) and then
>> + * initiates appending any buffered OA reports.
>> + *
>> + * Updates @offset according to the number of bytes successfully copied into
>> + * the userspace buffer.
>> + *
>> + * NB: some data may be successfully copied to the userspace buffer
>> + * even if an error is returned, and this is reflected in the
>> + * updated @offset.
>> + *
>> + * Returns: zero on success or a negative error code
>> + */
>> +static int gen8_oa_read(struct i915_perf_stream *stream,
>> +                       char __user *buf,
>> +                       size_t count,
>> +                       size_t *offset)
>> +{
>> +       struct drm_i915_private *dev_priv = stream->dev_priv;
>> +       u32 oastatus;
>> +       int ret;
>> +
>> +       if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
>> +               return -EIO;
>> +
>> +       oastatus = I915_READ(GEN8_OASTATUS);
>> +
>> +       /* We treat OABUFFER_OVERFLOW as a significant error:
>> +        *
>> +        * Although theoretically we could handle this more gracefully
>> +        * sometimes, some Gens don't correctly suppress certain
>> +        * automatically triggered reports in this condition and so we
>> +        * have to assume that old reports are now being trampled
>> +        * over.
>> +        *
>> +        * Considering how we don't currently give userspace control
>> +        * over the OA buffer size and always configure a large 16MB
>> +        * buffer, a buffer overflow anyway likely indicates
>> +        * that something has gone quite badly wrong.
>> +        */
>> +       if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
>> +               ret = append_oa_status(stream, buf, count, offset,
>> +                                      DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
>> +               if (ret)
>> +                       return ret;
>> +
>> +               DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
>> +                         dev_priv->perf.oa.period_exponent);
>> +
>> +               dev_priv->perf.oa.ops.oa_disable(dev_priv);
>> +               dev_priv->perf.oa.ops.oa_enable(dev_priv);
>> +
>> +               /* Note: .oa_enable() is expected to re-init the oabuffer
>> +                * and reset GEN8_OASTATUS for us
>> +                */
>> +               oastatus = I915_READ(GEN8_OASTATUS);
>> +       }
>> +
>> +       if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
>> +               ret = append_oa_status(stream, buf, count, offset,
>> +                                      DRM_I915_PERF_RECORD_OA_REPORT_LOST);
>> +               if (ret)
>> +                       return ret;
>> +               I915_WRITE(GEN8_OASTATUS,
>> +                          oastatus & ~GEN8_OASTATUS_REPORT_LOST);
>> +       }
>> +
>> +       return gen8_append_oa_reports(stream, buf, count, offset);
>> +}
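For reference, the records produced by this read() path follow the existing
i915 perf record layout; a small, hypothetical userspace loop consuming them
might look like the sketch below (record types and the header struct come
from include/uapi/drm/i915_drm.h; the buffer size is arbitrary and error
handling is elided).

#include <stdint.h>
#include <unistd.h>
#include <drm/i915_drm.h>

/* Sketch only: consume one read() worth of records from an i915 perf fd
 * opened with DRM_I915_PERF_PROP_SAMPLE_OA, where each SAMPLE record is a
 * header followed by a raw OA report.
 */
static void drain_records(int stream_fd)
{
	uint8_t buf[64 * 1024];
	ssize_t len = read(stream_fd, buf, sizeof(buf));
	ssize_t offset = 0;

	while (offset < len) {
		const struct drm_i915_perf_record_header *header =
			(const void *)(buf + offset);

		switch (header->type) {
		case DRM_I915_PERF_RECORD_SAMPLE:
			/* Raw OA report follows the header (e.g. 256 bytes
			 * for I915_OA_FORMAT_A32u40_A4u32_B8_C8).
			 */
			break;
		case DRM_I915_PERF_RECORD_OA_REPORT_LOST:
			/* One or more OA reports were lost */
			break;
		case DRM_I915_PERF_RECORD_OA_BUFFER_LOST:
			/* The OA buffer filled; old reports were overwritten */
			break;
		}

		offset += header->size;
	}
}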
>> +
>> +/**
>> + * Copies all buffered OA reports into userspace read() buffer.
>> + * @stream: An i915-perf stream opened for OA metrics
>> + * @buf: destination buffer given by userspace
>> + * @count: the number of bytes userspace wants to read
>> + * @offset: (inout): the current position for writing into @buf
>> + *
>> + * Notably any error condition resulting in a short read (-%ENOSPC or
>> + * -%EFAULT) will be returned even though one or more records may
>> + * have been successfully copied. In this case it's up to the caller
>> + * to decide if the error should be squashed before returning to
>> + * userspace.
>> + *
>> + * Note: reports are consumed from the head, and appended to the
>> + * tail, so the tail chases the head?... If you think that's mad
>> + * and back-to-front you're not alone, but this follows the
>> + * Gen PRM naming convention.
>> + *
>> + * Returns: 0 on success, negative error code on failure.
>> + */
>>   static int gen7_append_oa_reports(struct i915_perf_stream *stream,
>>                                    char __user *buf,
>>                                    size_t count,
>> @@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
>>                  if (ret)
>>                          return ret;
>>
>> -               DRM_DEBUG("OA buffer overflow: force restart\n");
>> +               DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
>> +                         dev_priv->perf.oa.period_exponent);
>>
>>                  dev_priv->perf.oa.ops.oa_disable(dev_priv);
>>                  dev_priv->perf.oa.ops.oa_enable(dev_priv);
>> @@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
>>                  return -EIO;
>>
>>          return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
>> -                                       dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
>> +                                       oa_buffer_check_unlocked(dev_priv));
>>   }
>>
>>   /**
>> @@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
>>   static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
>>   {
>>          struct drm_i915_private *dev_priv = stream->dev_priv;
>> -       struct intel_engine_cs *engine = dev_priv->engine[RCS];
>> -       int ret;
>>
>> -       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
>> -       if (ret)
>> -               return ret;
>> +       if (i915.enable_execlists)
>> +               dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
>> +       else {
>> +               struct intel_engine_cs *engine = dev_priv->engine[RCS];
>> +               int ret;
>>
>> -       /* As the ID is the gtt offset of the context's vma we pin
>> -        * the vma to ensure the ID remains fixed.
>> -        *
>> -        * NB: implied RCS engine...
>> -        */
>> -       ret = engine->context_pin(engine, stream->ctx);
>> -       if (ret)
>> -               goto unlock;
>> +               ret = i915_mutex_lock_interruptible(&dev_priv->drm);
>> +               if (ret)
>> +                       return ret;
>>
>> -       /* Explicitly track the ID (instead of calling i915_ggtt_offset()
>> -        * on the fly) considering the difference with gen8+ and
>> -        * execlists
>> -        */
>> -       dev_priv->perf.oa.specific_ctx_id =
>> -               i915_ggtt_offset(stream->ctx->engine[engine->id].state);
>> +               /* As the ID is the gtt offset of the context's vma we pin
>> +                * the vma to ensure the ID remains fixed.
>> +                */
>> +               ret = engine->context_pin(engine, stream->ctx);
>> +               if (ret) {
>> +                       mutex_unlock(&dev_priv->drm.struct_mutex);
>> +                       return ret;
>> +               }
>>
>> -unlock:
>> -       mutex_unlock(&dev_priv->drm.struct_mutex);
>> +               /* Explicitly track the ID (instead of calling
>> +                * i915_ggtt_offset() on the fly) considering the difference
>> +                * with gen8+ and execlists
>> +                */
>> +               dev_priv->perf.oa.specific_ctx_id =
>> +                       i915_ggtt_offset(stream->ctx->engine[engine->id].state);
>>
>> -       return ret;
>> +               mutex_unlock(&dev_priv->drm.struct_mutex);
>> +       }
>> +
>> +       return 0;
>>   }
>>
>>   /**
>> @@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
>>          struct drm_i915_private *dev_priv = stream->dev_priv;
>>          struct intel_engine_cs *engine = dev_priv->engine[RCS];
>>
>> -       mutex_lock(&dev_priv->drm.struct_mutex);
>> +       if (i915.enable_execlists) {
>> +               dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
>> +       } else {
>> +               mutex_lock(&dev_priv->drm.struct_mutex);
>>
>> -       dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
>> -       engine->context_unpin(engine, stream->ctx);
>> +               dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
>> +               engine->context_unpin(engine, stream->ctx);
>>
>> -       mutex_unlock(&dev_priv->drm.struct_mutex);
>> +               mutex_unlock(&dev_priv->drm.struct_mutex);
>> +       }
>>   }
>>
>>   static void
>> @@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
>>          dev_priv->perf.oa.pollin = false;
>>   }
>>
>> +static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
>> +{
>> +       u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
>> +       unsigned long flags;
>> +
>> +       spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>> +
>> +       I915_WRITE(GEN8_OASTATUS, 0);
>> +       I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
>> +       dev_priv->perf.oa.oa_buffer.head = gtt_offset;
>> +
>> +       I915_WRITE(GEN8_OABUFFER_UDW, 0);
>> +
>> +       /* PRM says:
>> +        *
>> +        *  "This MMIO must be set before the OATAILPTR
>> +        *  register and after the OAHEADPTR register. This is
>> +        *  to enable proper functionality of the overflow
>> +        *  bit."
>> +        */
>> +       I915_WRITE(GEN8_OABUFFER, gtt_offset |
>> +                  OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
>> +       I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
>> +
>> +       /* Mark that we need updated tail pointers to read from... */
>> +       dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
>> +       dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
>> +
>> +       /* Reset state used to recognise context switches, affecting which
>> +        * reports we will forward to userspace while filtering for a single
>> +        * context.
>> +        */
>> +       dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
>> +
>> +       spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>> +
>> +       /* NB: although the OA buffer will initially be allocated
>> +        * zeroed via shmfs (and so this memset is redundant when
>> +        * first allocating), we may re-init the OA buffer, either
>> +        * when re-enabling a stream or in error/reset paths.
>> +        *
>> +        * The reason we clear the buffer for each re-init is for the
>> +        * sanity check in gen8_append_oa_reports() that looks at the
>> +        * reason field to make sure it's non-zero which relies on
>> +        * the assumption that new reports are being written to zeroed
>> +        * memory...
>> +        */
>> +       memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
>> +
>> +       /* Maybe make ->pollin per-stream state if we support multiple
>> +        * concurrent streams in the future.
>> +        */
>> +       dev_priv->perf.oa.pollin = false;
>> +}
>> +
>>   static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
>>   {
>>          struct drm_i915_gem_object *bo;
>> @@ -1113,6 +1489,280 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
>>                                        ~GT_NOA_ENABLE));
>>   }
>>
>> +/*
>> + * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
>> + *
>> + * MMIO reads or writes to any of the registers listed in the
>> + * “Register State Context image” subsections through HOST/IA
>> + * MMIO interface for debug purposes must follow the steps below:
>> + *
>> + * - SW should set the Force Wakeup bit to prevent GT from entering C6.
>> + * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
>> + * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
>> + * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
>> + * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
>> + * - Read/Write to desired MMIO registers.
>> + * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
>> + * - Force Wakeup bit should be reset to enable C6 entry.
>> + *
>> + * XXX: don't nest or overlap calls to this API, it has no ref
>> + * counting to track how many entities require the RCS to be
>> + * blocked from being idle.
>> + */
>> +static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
>> +{
>> +       i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
>> +               GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
>> +       int ret = 0;
>> +
>> +       /* There's only no active context while idle in execlist mode
>> +        * (though we shouldn't be using this in any other case)
>> +        */
>> +       if (WARN_ON(!i915.enable_execlists))
>> +               return -EINVAL;
>> +
>> +       intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
>> +
>> +       I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
>> +                  _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
>> +
>> +       /* Note: we don't currently have a good handle on the maximum
>> +        * latency for this wake up so while we only need to hold rcs
>> +        * busy from process context we at least keep the waiting
>> +        * interruptible...
>> +        */
>> +       ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
>> +       if (ret == -ETIMEDOUT)
>> +               DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
>> +
>> +       if (ret)
>> +               intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
>> +
>> +       return ret;
>> +}
>> +
>> +static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
>> +{
>> +       if (WARN_ON(!i915.enable_execlists))
>> +               return;
>> +
>> +       I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
>> +                  _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
>> +       intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
>> +}
>> +
>> +/* NB: It must always remain pointer safe to run this even if the OA unit
>> + * has been disabled.
>> + *
>> + * It's fine to put out-of-date values into these per-context registers
>> + * in the case that the OA unit has been disabled.
>> + */
>> +static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
>> +                                          u32 *reg_state)
>> +{
>> +       struct drm_i915_private *dev_priv = ctx->i915;
>> +       const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
>> +       int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
>> +       u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
>> +       u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
>> +       /* The MMIO offsets for Flex EU registers aren't contiguous */
>> +       u32 flex_mmio[] = {
>> +               i915_mmio_reg_offset(EU_PERF_CNTL0),
>> +               i915_mmio_reg_offset(EU_PERF_CNTL1),
>> +               i915_mmio_reg_offset(EU_PERF_CNTL2),
>> +               i915_mmio_reg_offset(EU_PERF_CNTL3),
>> +               i915_mmio_reg_offset(EU_PERF_CNTL4),
>> +               i915_mmio_reg_offset(EU_PERF_CNTL5),
>> +               i915_mmio_reg_offset(EU_PERF_CNTL6),
>> +       };
>> +       int i;
>> +
>> +       reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
>> +       reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
>> +                                     GEN8_OA_TIMER_PERIOD_SHIFT) |
>> +                                    (dev_priv->perf.oa.periodic ?
>> +                                     GEN8_OA_TIMER_ENABLE : 0) |
>> +                                    GEN8_OA_COUNTER_RESUME;
>> +
>> +       for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
>> +               u32 state_offset = ctx_flexeu0 + i * 2;
>> +               u32 mmio = flex_mmio[i];
>> +
>> +               /* This arbitrary default will select the 'EU FPU0 Pipeline
>> +                * Active' event. In the future it's anticipated that there
>> +                * will be an explicit 'No Event' we can select, but not yet...
>> +                */
>> +               u32 value = 0;
>> +               int j;
>> +
>> +               for (j = 0; j < n_flex_regs; j++) {
>> +                       if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
>> +                               value = flex_regs[j].value;
>> +                               break;
>> +                       }
>> +               }
>> +
>> +               reg_state[state_offset] = mmio;
>> +               reg_state[state_offset+1] = value;
>> +       }
>> +}
>> +
>> +/* Manages updating the per-context aspects of the OA stream
>> + * configuration across all contexts.
>> + *
>> + * The awkward consideration here is that OACTXCONTROL controls the
>> + * exponent for periodic sampling which is primarily used for system
>> + * wide profiling where we'd like a consistent sampling period even in
>> + * the face of context switches.
>> + *
>> + * Our approach of updating the register state context (as opposed to
>> + * say using a workaround batch buffer) ensures that the hardware
>> + * won't automatically reload an out-of-date timer exponent even
>> + * transiently before a WA BB could be parsed.
>> + *
>> + * This function needs to:
>> + * - Ensure the currently running context's per-context OA state is
>> + *   updated
>> + * - Ensure that all existing contexts will have the correct per-context
>> + *   OA state if they are scheduled for use.
>> + * - Ensure any new contexts will be initialized with the correct
>> + *   per-context OA state.
>> + *
>> + * Note: it's only the RCS/Render context that has any OA state.
>> + */
>> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
>> +{
>> +       struct i915_gem_context *ctx;
>> +       int ret;
>> +
>> +       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
>> +       if (ret)
>> +               return ret;
>> +
>> +       /* Switch away from any user context.  */
>> +       ret = i915_gem_switch_to_kernel_context(dev_priv);
>> +       if (ret)
>   mutex_unlock(&dev_priv->drm.struct_mutex);

Duh! Thanks!

>
>> +               return ret;
>> +
>> +       /* The OA register config is setup through the context image. This image
>> +        * might be written to by the GPU on context switch (in particular on
>> +        * lite-restore). This means we can't safely update a context's image,
>> +        * if this context is scheduled/submitted to run on the GPU.
>> +        *
>> +        * We could emit the OA register config through the batch buffer but
>> +        * this might leave a small interval of time where the OA unit is
>> +        * configured at an invalid sampling period.
>> +        *
>> +        * So far the best way to work around this issue seems to be draining
>> +        * the GPU from any submitted work.
>> +        */
>> +       ret = i915_gem_wait_for_idle(dev_priv,
>> +                                    I915_WAIT_INTERRUPTIBLE |
>> +                                    I915_WAIT_LOCKED);
>> +       if (ret) {
>> +               mutex_unlock(&dev_priv->drm.struct_mutex);
>> +               return ret;
>> +       }
>> +
>> +       /* Update all contexts now that we've stalled the submission. */
>> +       list_for_each_entry(ctx, &dev_priv->context_list, link) {
>> +               if (!ctx->engine[RCS].initialised)
>> +                       continue;
>> +
>> +               gen8_update_reg_state_unlocked(ctx,
>> +                                              ctx->engine[RCS].lrc_reg_state);
>> +       }
>> +
>> +       mutex_unlock(&dev_priv->drm.struct_mutex);
>> +
>> +       /* Now update the current context.
>> +        *
>> +        * Note: Using MMIO to update per-context registers requires
>> +        * some extra care...
>> +        */
>> +       ret = gen8_begin_ctx_mmio(dev_priv);
>> +       if (ret) {
>> +               DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
>> +               return ret;
>> +       }
> So do we still need the mmio for the current context, given that we
> now idle everything and then update the reg state?

I would say yes, because we don't have any guarantee that another 
context is going to be scheduled after that.
Yet we want the config to be applied before we return.

>> +
>> +       I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
>> +                                       GEN8_OA_TIMER_PERIOD_SHIFT) |
>> +                                     (dev_priv->perf.oa.periodic ?
>> +                                      GEN8_OA_TIMER_ENABLE : 0) |
>> +                                     GEN8_OA_COUNTER_RESUME));
>> +
>> +       config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
>> +                       dev_priv->perf.oa.flex_regs_len);
>> +
>> +       gen8_end_ctx_mmio(dev_priv);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
>> +{
>> +       int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
>> +
>> +       if (ret)
>> +               return ret;
>> +
>> +       /* We disable slice/unslice clock ratio change reports on SKL since
>> +        * they are too noisy. The HW generates a lot of redundant reports
>> +        * where the ratio hasn't really changed causing a lot of redundant
>> +        * work to processes and increasing the chances we'll hit buffer
>> +        * overruns.
>> +        *
>> +        * Although we don't currently use the 'disable overrun' OABUFFER
>> +        * feature it's worth noting that clock ratio reports have to be
>> +        * disabled before considering to use that feature since the HW doesn't
>> +        * correctly block these reports.
>> +        *
>> +        * Currently none of the high-level metrics we have depend on knowing
>> +        * this ratio to normalize.
>> +        *
>> +        * Note: This register is not power context saved and restored, but
>> +        * that's OK considering that we disable RC6 while the OA unit is
>> +        * enabled.
>> +        *
>> +        * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
>> +        * be read back from automatically triggered reports, as part of the
>> +        * RPT_ID field.
>> +        */
>> +       if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
>> +               I915_WRITE(GEN8_OA_DEBUG,
>> +                          _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
>> +                                             GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
>> +       }
>> +
>> +       I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
>> +       config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
>> +                      dev_priv->perf.oa.mux_regs_len);
>> +       I915_WRITE(GDT_CHICKEN_BITS, 0x80);
>> +
>> +       /* It takes a fairly long time for a new MUX configuration to
>> +        * be applied after these register writes. This delay
>> +        * duration is taken from Haswell (derived empirically based on
>> +        * the render_basic config) but hopefully it covers the
>> +        * maximum configuration latency for Gen8+ too...
>> +        */
>> +       usleep_range(15000, 20000);
>> +
>> +       config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
>> +                      dev_priv->perf.oa.b_counter_regs_len);
>> +
>> +       ret = gen8_configure_all_contexts(dev_priv);
>> +       if (ret)
>> +               return ret;
>> +
>> +       return 0;
>> +}
>> +
>> +static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
>> +{
>> +       /* NOP */
>> +}
>> +
>>   static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
>>   {
>>          lockdep_assert_held(&dev_priv->perf.hook_lock);
>> @@ -1157,6 +1807,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
>>          spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
>>   }
>>
>> +static void gen8_oa_enable(struct drm_i915_private *dev_priv)
>> +{
>> +       u32 report_format = dev_priv->perf.oa.oa_buffer.format;
>> +
>> +       /* Reset buf pointers so we don't forward reports from before now.
>> +        *
>> +        * Think carefully if considering trying to avoid this, since it
>> +        * also ensures status flags and the buffer itself are cleared
>> +        * in error paths, and we have checks for invalid reports based
>> +        * on the assumption that certain fields are written to zeroed
>> +        * memory which this helps maintain.
>> +        */
>> +       gen8_init_oa_buffer(dev_priv);
>> +
>> +       /* Note: we don't rely on the hardware to perform single context
>> +        * filtering and instead filter on the cpu based on the context-id
>> +        * field of reports
>> +        */
>> +       I915_WRITE(GEN8_OACONTROL, (report_format <<
>> +                                   GEN8_OA_REPORT_FORMAT_SHIFT) |
>> +                                  GEN8_OA_COUNTER_ENABLE);
>> +}
>> +
>>   /**
>>    * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
>>    * @stream: An i915 perf stream opened for OA metrics
>> @@ -1183,6 +1856,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
>>          I915_WRITE(GEN7_OACONTROL, 0);
>>   }
>>
>> +static void gen8_oa_disable(struct drm_i915_private *dev_priv)
>> +{
>> +       I915_WRITE(GEN8_OACONTROL, 0);
>> +}
>> +
>>   /**
>>    * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
>>    * @stream: An i915 perf stream opened for OA metrics
>> @@ -1361,6 +2039,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
>>          return ret;
>>   }
>>
>> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
>> +                           struct i915_gem_context *ctx,
>> +                           u32 *reg_state)
>> +{
>> +       struct drm_i915_private *dev_priv = engine->i915;
>> +
>> +       if (engine->id != RCS)
>> +               return;
>> +
>> +       if (!dev_priv->perf.initialized)
>> +               return;
>> +
>> +       gen8_update_reg_state_unlocked(ctx, reg_state);
>> +}
>> +
>>   /**
>>    * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
>>    * @stream: An i915 perf stream
>> @@ -1486,7 +2179,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
>>                  container_of(hrtimer, typeof(*dev_priv),
>>                               perf.oa.poll_check_timer);
>>
>> -       if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
>> +       if (oa_buffer_check_unlocked(dev_priv)) {
>>                  dev_priv->perf.oa.pollin = true;
>>                  wake_up(&dev_priv->perf.oa.poll_wq);
>>          }
>> @@ -1775,6 +2468,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
>>          struct i915_gem_context *specific_ctx = NULL;
>>          struct i915_perf_stream *stream = NULL;
>>          unsigned long f_flags = 0;
>> +       bool privileged_op = true;
>>          int stream_fd;
>>          int ret;
>>
>> @@ -1792,12 +2486,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
>>                  }
>>          }
>>
>> +       /* On Haswell the OA unit supports clock gating off for a specific
>> +        * context and in this mode there's no visibility of metrics for the
>> +        * rest of the system, which we consider acceptable for a
>> +        * non-privileged client.
>> +        *
>> +        * For Gen8+ the OA unit no longer supports clock gating off for a
>> +        * specific context and the kernel can't securely stop the counters
>> +        * from updating as system-wide / global values. Even though we can
>> +        * filter reports based on the included context ID we can't block
>> +        * clients from seeing the raw / global counter values via
>> +        * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
>> +        * enable the OA unit by default.
>> +        */
>> +       if (IS_HASWELL(dev_priv) && specific_ctx)
>> +               privileged_op = false;
>> +
>>          /* Similar to perf's kernel.perf_paranoid_cpu sysctl option
>>           * we check a dev.i915.perf_stream_paranoid sysctl option
>>           * to determine if it's ok to access system wide OA counters
>>           * without CAP_SYS_ADMIN privileges.
>>           */
>> -       if (!specific_ctx &&
>> +       if (privileged_op &&
>>              i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
>>                  DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
>>                  ret = -EACCES;
>> @@ -2069,9 +2779,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>>    */
>>   void i915_perf_register(struct drm_i915_private *dev_priv)
>>   {
>> -       if (!IS_HASWELL(dev_priv))
>> -               return;
>> -
>>          if (!dev_priv->perf.initialized)
>>                  return;
>>
>> @@ -2087,11 +2794,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
>>          if (!dev_priv->perf.metrics_kobj)
>>                  goto exit;
>>
>> -       if (i915_perf_register_sysfs_hsw(dev_priv)) {
>> -               kobject_put(dev_priv->perf.metrics_kobj);
>> -               dev_priv->perf.metrics_kobj = NULL;
>> +       if (IS_HASWELL(dev_priv)) {
>> +               if (i915_perf_register_sysfs_hsw(dev_priv))
>> +                       goto sysfs_error;
>> +       } else if (IS_BROADWELL(dev_priv)) {
>> +               if (i915_perf_register_sysfs_bdw(dev_priv))
>> +                       goto sysfs_error;
>> +       } else if (IS_CHERRYVIEW(dev_priv)) {
>> +               if (i915_perf_register_sysfs_chv(dev_priv))
>> +                       goto sysfs_error;
>> +       } else if (IS_SKYLAKE(dev_priv)) {
>> +               if (IS_SKL_GT2(dev_priv)) {
>> +                       if (i915_perf_register_sysfs_sklgt2(dev_priv))
>> +                               goto sysfs_error;
>> +               } else if (IS_SKL_GT3(dev_priv)) {
>> +                       if (i915_perf_register_sysfs_sklgt3(dev_priv))
>> +                               goto sysfs_error;
>> +               } else if (IS_SKL_GT4(dev_priv)) {
>> +                       if (i915_perf_register_sysfs_sklgt4(dev_priv))
>> +                               goto sysfs_error;
>> +               } else
>> +                       goto sysfs_error;
>> +       } else if (IS_BROXTON(dev_priv)) {
>> +               if (i915_perf_register_sysfs_bxt(dev_priv))
>> +                       goto sysfs_error;
>>          }
>>
>> +       goto exit;
>> +
>> +sysfs_error:
>> +       kobject_put(dev_priv->perf.metrics_kobj);
>> +       dev_priv->perf.metrics_kobj = NULL;
>> +
>>   exit:
>>          mutex_unlock(&dev_priv->perf.lock);
>>   }
>> @@ -2107,13 +2841,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
>>    */
>>   void i915_perf_unregister(struct drm_i915_private *dev_priv)
>>   {
>> -       if (!IS_HASWELL(dev_priv))
>> -               return;
>> -
>>          if (!dev_priv->perf.metrics_kobj)
>>                  return;
>>
>> -       i915_perf_unregister_sysfs_hsw(dev_priv);
>> +        if (IS_HASWELL(dev_priv))
>> +                i915_perf_unregister_sysfs_hsw(dev_priv);
>> +        else if (IS_BROADWELL(dev_priv))
>> +                i915_perf_unregister_sysfs_bdw(dev_priv);
>> +        else if (IS_CHERRYVIEW(dev_priv))
>> +                i915_perf_unregister_sysfs_chv(dev_priv);
>> +        else if (IS_SKYLAKE(dev_priv)) {
>> +               if (IS_SKL_GT2(dev_priv))
>> +                       i915_perf_unregister_sysfs_sklgt2(dev_priv);
>> +               else if (IS_SKL_GT3(dev_priv))
>> +                       i915_perf_unregister_sysfs_sklgt3(dev_priv);
>> +               else if (IS_SKL_GT4(dev_priv))
>> +                       i915_perf_unregister_sysfs_sklgt4(dev_priv);
>> +       } else if (IS_BROXTON(dev_priv))
>> +                i915_perf_unregister_sysfs_bxt(dev_priv);
>>
>>          kobject_put(dev_priv->perf.metrics_kobj);
>>          dev_priv->perf.metrics_kobj = NULL;
>> @@ -2172,36 +2917,105 @@ static struct ctl_table dev_root[] = {
>>    */
>>   void i915_perf_init(struct drm_i915_private *dev_priv)
>>   {
>> -       if (!IS_HASWELL(dev_priv))
>> -               return;
>> -
>> -       hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
>> -                    CLOCK_MONOTONIC, HRTIMER_MODE_REL);
>> -       dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
>> -       init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
>> +       dev_priv->perf.oa.n_builtin_sets = 0;
>> +
>> +       if (IS_HASWELL(dev_priv)) {
>> +               dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
>> +               dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
>> +               dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
>> +               dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
>> +               dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
>> +               dev_priv->perf.oa.ops.read = gen7_oa_read;
>> +               dev_priv->perf.oa.ops.oa_hw_tail_read =
>> +                       gen7_oa_hw_tail_read;
>> +
>> +               dev_priv->perf.oa.oa_formats = hsw_oa_formats;
>> +
>> +               dev_priv->perf.oa.n_builtin_sets =
>> +                       i915_oa_n_builtin_metric_sets_hsw;
>> +       } else if (i915.enable_execlists) {
>> +               /* Note: that although we could theoretically also support the
>> +                * legacy ringbuffer mode on BDW (and earlier iterations of
>> +                * this driver, before upstreaming did this) it didn't seem
>> +                * worth the complexity to maintain now that BDW+ enable
>> +                * execlist mode by default.
>> +                */
>>
>> -       INIT_LIST_HEAD(&dev_priv->perf.streams);
>> -       mutex_init(&dev_priv->perf.lock);
>> -       spin_lock_init(&dev_priv->perf.hook_lock);
>> -       spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
>> +               if (IS_GEN8(dev_priv)) {
>> +                       dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
>> +                       dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
>> +                       dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
>> +
>> +                       if (IS_BROADWELL(dev_priv)) {
>> +                               dev_priv->perf.oa.n_builtin_sets =
>> +                                       i915_oa_n_builtin_metric_sets_bdw;
>> +                               dev_priv->perf.oa.ops.select_metric_set =
>> +                                       i915_oa_select_metric_set_bdw;
>> +                       } else if (IS_CHERRYVIEW(dev_priv)) {
>> +                               dev_priv->perf.oa.n_builtin_sets =
>> +                                       i915_oa_n_builtin_metric_sets_chv;
>> +                               dev_priv->perf.oa.ops.select_metric_set =
>> +                                       i915_oa_select_metric_set_chv;
>> +                       }
>> +               } else if (IS_GEN9(dev_priv)) {
>> +                       dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
>> +                       dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
>> +                       dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
>> +
>> +                       if (IS_SKL_GT2(dev_priv)) {
>> +                               dev_priv->perf.oa.n_builtin_sets =
>> +                                       i915_oa_n_builtin_metric_sets_sklgt2;
>> +                               dev_priv->perf.oa.ops.select_metric_set =
>> +                                       i915_oa_select_metric_set_sklgt2;
>> +                       } else if (IS_SKL_GT3(dev_priv)) {
>> +                               dev_priv->perf.oa.n_builtin_sets =
>> +                                       i915_oa_n_builtin_metric_sets_sklgt3;
>> +                               dev_priv->perf.oa.ops.select_metric_set =
>> +                                       i915_oa_select_metric_set_sklgt3;
>> +                       } else if (IS_SKL_GT4(dev_priv)) {
>> +                               dev_priv->perf.oa.n_builtin_sets =
>> +                                       i915_oa_n_builtin_metric_sets_sklgt4;
>> +                               dev_priv->perf.oa.ops.select_metric_set =
>> +                                       i915_oa_select_metric_set_sklgt4;
>> +                       } else if (IS_BROXTON(dev_priv)) {
>> +                               dev_priv->perf.oa.n_builtin_sets =
>> +                                       i915_oa_n_builtin_metric_sets_bxt;
>> +                               dev_priv->perf.oa.ops.select_metric_set =
>> +                                       i915_oa_select_metric_set_bxt;
>> +                       }
>> +               }
>>
>> -       dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
>> -       dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
>> -       dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
>> -       dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
>> -       dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
>> -       dev_priv->perf.oa.ops.read = gen7_oa_read;
>> -       dev_priv->perf.oa.ops.oa_buffer_check =
>> -               gen7_oa_buffer_check_unlocked;
>> +               if (dev_priv->perf.oa.n_builtin_sets) {
>> +                       dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
>> +                       dev_priv->perf.oa.ops.enable_metric_set =
>> +                               gen8_enable_metric_set;
>> +                       dev_priv->perf.oa.ops.disable_metric_set =
>> +                               gen8_disable_metric_set;
>> +                       dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
>> +                       dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
>> +                       dev_priv->perf.oa.ops.read = gen8_oa_read;
>> +                       dev_priv->perf.oa.ops.oa_hw_tail_read =
>> +                               gen8_oa_hw_tail_read;
>> +
>> +                       dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
>> +               }
>> +       }
>>
>> -       dev_priv->perf.oa.oa_formats = hsw_oa_formats;
>> +       if (dev_priv->perf.oa.n_builtin_sets) {
>> +               hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
>> +                               CLOCK_MONOTONIC, HRTIMER_MODE_REL);
>> +               dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
>> +               init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
>>
>> -       dev_priv->perf.oa.n_builtin_sets =
>> -               i915_oa_n_builtin_metric_sets_hsw;
>> +               INIT_LIST_HEAD(&dev_priv->perf.streams);
>> +               mutex_init(&dev_priv->perf.lock);
>> +               spin_lock_init(&dev_priv->perf.hook_lock);
>> +               spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
>>
>> -       dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
>> +               dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
>>
>> -       dev_priv->perf.initialized = true;
>> +               dev_priv->perf.initialized = true;
>> +       }
>>   }
>>
>>   /**
>> @@ -2216,5 +3030,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
>>          unregister_sysctl_table(dev_priv->perf.sysctl_header);
>>
>>          memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
>> +
>>          dev_priv->perf.initialized = false;
>>   }
>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
>> index 4c72adae368b..c0bbba6d5b78 100644
>> --- a/drivers/gpu/drm/i915/i915_reg.h
>> +++ b/drivers/gpu/drm/i915/i915_reg.h
>> @@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>>
>>   #define GEN8_OACTXID _MMIO(0x2364)
>>
>> +#define GEN8_OA_DEBUG _MMIO(0x2B04)
>> +#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
>> +#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO           (1<<6)
>> +#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS      (1<<2)
>> +#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
>> +
>>   #define GEN8_OACONTROL _MMIO(0x2B00)
>>   #define  GEN8_OA_REPORT_FORMAT_A12         (0<<2)
>>   #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
>> @@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>>   #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
>>   #define  GEN7_OABUFFER_RESUME              (1<<0)
>>
>> +#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
>>   #define GEN8_OABUFFER _MMIO(0x2b14)
>>
>>   #define GEN7_OASTATUS1 _MMIO(0x2364)
>> @@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>>   #define  GEN8_OASTATUS_REPORT_LOST         (1<<0)
>>
>>   #define GEN8_OAHEADPTR _MMIO(0x2B0C)
>> +#define GEN8_OAHEADPTR_MASK    0xffffffc0
>>   #define GEN8_OATAILPTR _MMIO(0x2B10)
>> +#define GEN8_OATAILPTR_MASK    0xffffffc0
>>
>>   #define OABUFFER_SIZE_128K  (0<<3)
>>   #define OABUFFER_SIZE_256K  (1<<3)
>> @@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>>
>>   #define OA_MEM_SELECT_GGTT  (1<<0)
>>
>> +/*
>> + * Flexible, Aggregate EU Counter Registers.
>> + * Note: these aren't contiguous
>> + */
>>   #define EU_PERF_CNTL0      _MMIO(0xe458)
>> +#define EU_PERF_CNTL1      _MMIO(0xe558)
>> +#define EU_PERF_CNTL2      _MMIO(0xe658)
>> +#define EU_PERF_CNTL3      _MMIO(0xe758)
>> +#define EU_PERF_CNTL4      _MMIO(0xe45c)
>> +#define EU_PERF_CNTL5      _MMIO(0xe55c)
>> +#define EU_PERF_CNTL6      _MMIO(0xe65c)
>>
>>   #define GDT_CHICKEN_BITS    _MMIO(0x9840)
>>   #define GT_NOA_ENABLE      0x00000080
>> @@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
>>   #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE        (1 << 12)
>>   #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE       (1<<10)
>>
>> +#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
>> +#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
>> +
>>   /* Fuse readout registers for GT */
>>   #define CHV_FUSE_GT                    _MMIO(VLV_DISPLAY_BASE + 0x2168)
>>   #define   CHV_FGT_DISABLE_SS0          (1 << 10)
>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>> index 7df278fe492e..351e231a02f4 100644
>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>> @@ -1879,6 +1879,8 @@ static void execlists_init_reg_state(u32 *regs,
>>                  regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
>>                  CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
>>                          make_rpcs(dev_priv));
>> +
>> +               i915_oa_init_reg_state(engine, ctx, regs);
>>          }
>>   }
>>
>> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
>> index 464547d08173..15bc9f78ba4d 100644
>> --- a/include/uapi/drm/i915_drm.h
>> +++ b/include/uapi/drm/i915_drm.h
>> @@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
>>   };
>>
>>   enum drm_i915_oa_format {
>> -       I915_OA_FORMAT_A13 = 1,
>> -       I915_OA_FORMAT_A29,
>> -       I915_OA_FORMAT_A13_B8_C8,
>> -       I915_OA_FORMAT_B4_C8,
>> -       I915_OA_FORMAT_A45_B8_C8,
>> -       I915_OA_FORMAT_B4_C8_A16,
>> -       I915_OA_FORMAT_C4_B8,
>> +       I915_OA_FORMAT_A13 = 1,     /* HSW only */
>> +       I915_OA_FORMAT_A29,         /* HSW only */
>> +       I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
>> +       I915_OA_FORMAT_B4_C8,       /* HSW only */
>> +       I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
>> +       I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
>> +       I915_OA_FORMAT_C4_B8,       /* HSW+ */
>> +
>> +       /* Gen8+ */
>> +       I915_OA_FORMAT_A12,
>> +       I915_OA_FORMAT_A12_B8_C8,
>> +       I915_OA_FORMAT_A32u40_A4u32_B8_C8,
>>
>>          I915_OA_FORMAT_MAX          /* non-ABI */
>>   };
>> --
>> 2.11.0
>> _______________________________________________
>> Intel-gfx mailing list
>> Intel-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v7 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-25 16:42         ` Matthew Auld
  2017-04-25 17:10           ` Lionel Landwerlin
@ 2017-04-25 17:30           ` Lionel Landwerlin
  2017-04-27  8:53             ` Chris Wilson
  1 sibling, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-04-25 17:30 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency, which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands), is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state,
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently-out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).
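
In outline (a condensed, non-literal sketch of the
gen8_configure_all_contexts() path added by this patch; the function name
here is illustrative and locking/error handling are trimmed) the
per-context update done while the OA unit is still disabled looks like:

static int sketch_configure_all_contexts(struct drm_i915_private *dev_priv)
{
	struct i915_gem_context *ctx;

	/* Park on the kernel context and drain submitted work so no user
	 * context image can be rewritten under us by a lite-restore while
	 * we edit it.
	 */
	i915_gem_switch_to_kernel_context(dev_priv);
	i915_gem_wait_for_idle(dev_priv,
			       I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED);

	/* Rewrite the saved register state image of every context. */
	list_for_each_entry(ctx, &dev_priv->context_list, link)
		gen8_update_reg_state_unlocked(ctx,
					       ctx->engine[RCS].lrc_reg_state);

	/* Finally update the live HW state via MMIO, bracketed by
	 * gen8_begin_ctx_mmio()/gen8_end_ctx_mmio() so the RCS can't idle
	 * mid-update.
	 */
	return 0;
}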

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, it means
we can no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try to subtract the progress
of other contexts before forwarding reports via read(), we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require the
dev.i915.perf_stream_paranoid sysctl to be unset for any access to OA
metrics if not root.
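
As a rough userspace illustration (not part of this patch; the helper
name is made up, and the metrics set ID is assumed to come from the "id"
file of a config published under /sys/class/drm/cardN/metrics/), this is
what opening a stream without a context handle looks like. As before,
this system-wide case requires CAP_SYS_ADMIN or
dev.i915.perf_stream_paranoid=0; on Gen8+ the same check now applies
even when a context handle is given:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int open_system_wide_oa_stream(int drm_fd, uint64_t metrics_set_id,
				      uint64_t oa_exponent)
{
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, oa_exponent,
		/* No DRM_I915_PERF_PROP_CTX_HANDLE here: with a context
		 * handle a Haswell stream stays unprivileged, but on Gen8+
		 * the paranoid check applies either way.
		 */
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	/* Returns a new stream fd on success; fails with EACCES if the
	 * paranoid check isn't satisfied.
	 */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}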

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

v6: In addition to drain, switch to kernel context & update all
    context in place (Chris)

v7: Add missing mutex_unlock() if switching to kernel context fails
    (Matthew)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
 drivers/gpu/drm/i915/i915_drv.h  |  45 +-
 drivers/gpu/drm/i915/i915_perf.c | 959 ++++++++++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h  |  22 +
 drivers/gpu/drm/i915/intel_lrc.c |   2 +
 include/uapi/drm/i915_drm.h      |  19 +-
 5 files changed, 954 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ffa1fc5eddfd..676b1227067c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2064,9 +2064,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);

 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2097,20 +2105,13 @@ struct i915_oa_ops {
 		    size_t *offset);

 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };

 struct intel_cdclk_state {
@@ -2472,6 +2473,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;

@@ -2541,6 +2543,14 @@ struct drm_i915_private {
 			} oa_buffer;

 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;

 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,

 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    uint32_t *reg_state);

 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 3277a52ce98e..548bba41bfec 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@

 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"

 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;

 #define INVALID_CTX_ID 0xffffffff

+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+

 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };

+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)

 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };

+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;

@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;

-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);

 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;

@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context, it
+		 * needs to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiate appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering that we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, a buffer overflow most likely indicates that
+	 * something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * gen7_append_oa_reports - Copies all buffered OA reports into
+ *                          userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;

-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);

 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;

 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }

 /**
@@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	int ret;

-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		int ret;

-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ret = engine->context_pin(engine, stream->ctx);
-	if (ret)
-		goto unlock;
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;

-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we pin
+		 * the vma to ensure the ID remains fixed.
+		 */
+		ret = engine->context_pin(engine, stream->ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}

-unlock:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);

-	return ret;
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
+
+	return 0;
 }

 /**
@@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[RCS];

-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);

-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);

-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }

 static void
@@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }

+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1113,6 +1489,282 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }

+/*
+ * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
+ *
+ * MMIO reads or writes to any of the registers listed in the
+ * “Register State Context image” subsections through HOST/IA
+ * MMIO interface for debug purposes must follow the steps below:
+ *
+ * - SW should set the Force Wakeup bit to prevent GT from entering C6.
+ * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
+ * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
+ * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30 value.
+ * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30 value.
+ * - Read/Write to desired MMIO registers.
+ * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
+ * - Force Wakeup bit should be reset to enable C6 entry.
+ *
+ * XXX: don't nest or overlap calls to this API, it has no ref
+ * counting to track how many entities require the RCS to be
+ * blocked from being idle.
+ */
+static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
+		GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
+	int ret = 0;
+
+	/* It's only in execlist mode that there's no active context while idle
+	 * (though we shouldn't be using this in any other case)
+	 */
+	if (WARN_ON(!i915.enable_execlists))
+		return -EINVAL;
+
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+
+	/* Note: we don't currently have a good handle on the maximum
+	 * latency for this wake up so while we only need to hold rcs
+	 * busy from process context we at least keep the waiting
+	 * interruptible...
+	 */
+	ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
+	if (ret == -ETIMEDOUT)
+		DRM_ERROR("Timeout waiting to be able to begin per-ctx mmio\n");
+
+	if (ret)
+		intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+
+	return ret;
+}
+
+static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
+{
+	if (WARN_ON(!i915.enable_execlists))
+		return;
+
+	I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
+		   _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
+}
+
+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+	if (ret)
+		return ret;
+
+	/* Switch away from any user context.  */
+	ret = i915_gem_switch_to_kernel_context(dev_priv);
+	if (ret) {
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		return ret;
+	}
+
+	/* The OA register config is set up through the context image. This
+	 * image might be written to by the GPU on context switch (in particular
+	 * on lite-restore). This means we can't safely update a context's image
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave a small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv,
+				     I915_WAIT_INTERRUPTIBLE |
+				     I915_WAIT_LOCKED);
+	if (ret) {
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		return ret;
+	}
+
+	/* Update all contexts now that we've stalled the submission. */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		if (!ctx->engine[RCS].initialised)
+			continue;
+
+		gen8_update_reg_state_unlocked(ctx,
+					       ctx->engine[RCS].lrc_reg_state);
+	}
+
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	/* Now update the current context.
+	 *
+	 * Note: Using MMIO to update per-context registers requires
+	 * some extra care...
+	 */
+	ret = gen8_begin_ctx_mmio(dev_priv);
+	if (ret) {
+		DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
+		return ret;
+	}
+
+	I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
+					GEN8_OA_TIMER_PERIOD_SHIFT) |
+				      (dev_priv->perf.oa.periodic ?
+				       GEN8_OA_TIMER_ENABLE : 0) |
+				      GEN8_OA_COUNTER_RESUME));
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
+			dev_priv->perf.oa.flex_regs_len);
+
+	gen8_end_ctx_mmio(dev_priv);
+
+	return 0;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed, causing a lot of redundant
+	 * work for processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering using that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
+		       dev_priv->perf.oa.mux_regs_len);
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	/* It takes a fairly long time for a new MUX configuration to
+	 * be applied after these register writes. This delay
+	 * duration is taken from Haswell (derived empirically based on
+	 * the render_basic config) but hopefully it covers the
+	 * maximum configuration latency for Gen8+ too...
+	 */
+	usleep_range(15000, 20000);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	ret = gen8_configure_all_contexts(dev_priv);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* NOP */
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1157,6 +1809,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }

+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory which this helps maintains.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1183,6 +1858,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }

+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1361,6 +2041,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }

+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	gen8_update_reg_state_unlocked(ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1486,7 +2181,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);

-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1775,6 +2470,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;

@@ -1792,12 +2488,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}

+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2069,9 +2781,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;

@@ -2087,11 +2796,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;

-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}

+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2107,13 +2843,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;

-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);

 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2172,36 +2919,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note: although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming, did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enables
+		 * execlist mode by default.
+		 */

-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}

-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}

-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);

-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);

-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }

 /**
@@ -2216,5 +3032,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);

 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4c72adae368b..c0bbba6d5b78 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define GEN8_OACTXID _MMIO(0x2364)

+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)

+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)

 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)

 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0

 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define OA_MEM_SELECT_GGTT  (1<<0)

+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)

 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)

+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7df278fe492e..351e231a02f4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1879,6 +1879,8 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(dev_priv));
+
+		i915_oa_init_reg_state(engine, ctx, regs);
 	}
 }

diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 464547d08173..15bc9f78ba4d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
 };

 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,

 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
--
2.11.0
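
For context, a minimal userspace sketch of how a client might request one of
the new Gen8+ report formats through the existing perf open ioctl. This is
illustrative only: the property IDs, flags and DRM_IOCTL_I915_PERF_OPEN come
from the already-merged Haswell uapi, the metrics set id is assumed to have
been read from the metric set's sysfs "id" file (as exposed by the
i915_perf_register_sysfs_*() helpers above), and the exponent value is an
arbitrary example.

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	/* Minimal sketch: open an OA stream using one of the new Gen8+ formats.
	 * 'metrics_set_id' is assumed to come from the metric set's sysfs "id"
	 * file under the per-card "metrics" directory.
	 */
	static int open_oa_stream(int drm_fd, uint64_t metrics_set_id)
	{
		uint64_t properties[] = {
			DRM_I915_PERF_PROP_SAMPLE_OA, 1,
			DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
			DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
			DRM_I915_PERF_PROP_OA_EXPONENT, 16, /* periodic sampling exponent */
		};
		struct drm_i915_perf_open_param param = {
			.flags = I915_PERF_FLAG_FD_CLOEXEC | I915_PERF_FLAG_DISABLED,
			.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
			.properties_ptr = (uintptr_t)properties,
		};

		/* Without CAP_SYS_ADMIN this is expected to fail with EACCES on
		 * Gen8+ unless dev.i915.perf_stream_paranoid is 0, since the
		 * kernel can no longer hide other contexts' counter progress
		 * (see the privileged_op handling in i915_perf_open_ioctl_locked).
		 */
		return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
	}

Since the stream is opened disabled, userspace would then start it with the
I915_PERF_IOCTL_ENABLE ioctl on the returned fd before blocking in read() or
poll().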
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev8)
  2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
                   ` (17 preceding siblings ...)
  2017-04-24 18:52 ` ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev7) Patchwork
@ 2017-04-25 17:35 ` Patchwork
  18 siblings, 0 replies; 87+ messages in thread
From: Patchwork @ 2017-04-25 17:35 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

== Series Details ==

Series: Enable OA unit for Gen 8 and 9 in i915 perf (rev8)
URL   : https://patchwork.freedesktop.org/series/20084/
State : failure

== Summary ==

cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_ddi.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_ddi.o] Error 1
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_fbc.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_fbc.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_display.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_display.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/cfg_space.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/cfg_space.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_sprite.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_sprite.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/display.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/display.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_panel.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_panel.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/gvt.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/gvt.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_crt.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_crt.o] Error 1
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/i915_oa_hsw.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_oa_hsw.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_dpll_mgr.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_dpll_mgr.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_dsi.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_dsi.o] Error 1
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/i915_gpu_error.o' failed
make[4]: *** [drivers/gpu/drm/i915/i915_gpu_error.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/scheduler.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/scheduler.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/execlist.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/execlist.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/render.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/render.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/gtt.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/gtt.o] Error 1
  LD [M]  drivers/net/ethernet/intel/igbvf/igbvf.o
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/firmware.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/firmware.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/intel_lpe_audio.o' failed
make[4]: *** [drivers/gpu/drm/i915/intel_lpe_audio.o] Error 1
cc1: all warnings being treated as errors
scripts/Makefile.build:294: recipe for target 'drivers/gpu/drm/i915/gvt/handlers.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/handlers.o] Error 1
scripts/Makefile.build:553: recipe for target 'drivers/gpu/drm/i915' failed
make[3]: *** [drivers/gpu/drm/i915] Error 2
scripts/Makefile.build:553: recipe for target 'drivers/gpu/drm' failed
make[2]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:553: recipe for target 'drivers/gpu' failed
make[1]: *** [drivers/gpu] Error 2
make[1]: *** Waiting for unfinished jobs....
  AR      lib/lib.a
  EXPORTS lib/lib-ksyms.o
  LD      lib/built-in.o
  LD      drivers/usb/core/usbcore.o
  LD      drivers/usb/core/built-in.o
  CC      arch/x86/kernel/cpu/capflags.o
  LD      arch/x86/kernel/cpu/built-in.o
  LD      drivers/scsi/sd_mod.o
  LD      drivers/scsi/built-in.o
scripts/Makefile.build:553: recipe for target 'arch/x86/kernel' failed
make[1]: *** [arch/x86/kernel] Error 2
Makefile:1002: recipe for target 'arch/x86' failed
make: *** [arch/x86] Error 2
make: *** Waiting for unfinished jobs....
  LD      drivers/tty/vt/built-in.o
  LD      drivers/tty/built-in.o
  LD      drivers/md/md-mod.o
  LD      fs/btrfs/btrfs.o
  LD      drivers/md/built-in.o
  LD      net/xfrm/built-in.o
  LD      fs/btrfs/built-in.o
  LD      drivers/usb/host/xhci-hcd.o
  LD      net/ipv4/built-in.o
  LD [M]  drivers/net/ethernet/intel/igb/igb.o
  LD      net/core/built-in.o
  LD      drivers/usb/host/built-in.o
  LD      drivers/usb/built-in.o
  LD      net/built-in.o
  LD      fs/ext4/ext4.o
  LD      fs/ext4/built-in.o
  LD      fs/built-in.o
  LD [M]  drivers/net/ethernet/intel/e1000e/e1000e.o
  LD      drivers/net/ethernet/built-in.o
  LD      drivers/net/built-in.o
Makefile:1002: recipe for target 'drivers' failed
make: *** [drivers] Error 2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v6 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-25 17:10           ` Lionel Landwerlin
@ 2017-04-25 17:47             ` Matthew Auld
  0 siblings, 0 replies; 87+ messages in thread
From: Matthew Auld @ 2017-04-25 17:47 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: Intel Graphics Development

On 25 April 2017 at 18:10, Lionel Landwerlin
<lionel.g.landwerlin@intel.com> wrote:
> On 25/04/17 09:42, Matthew Auld wrote:
>>
>> On 24 April 2017 at 19:49, Lionel Landwerlin
>> <lionel.g.landwerlin@intel.com> wrote:
>>>
>>> From: Robert Bragg <robert@sixbynine.org>
>>>
>>> Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
>>> share (more-or-less) the same OA unit design.
>>>
>>> Of particular note in comparison to Haswell: some OA unit HW config
>>> state has become per-context state and as a consequence it is somewhat
>>> more complicated to manage synchronous state changes from the cpu while
>>> there's no guarantee of what context (if any) is currently actively
>>> running on the gpu.
>>>
>>> The periodic sampling frequency which can be particularly useful for
>>> system-wide analysis (as opposed to command stream synchronised
>>> MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
>>> have become per-context saved and restored (while the OABUFFER
>>> destination is still a shared, system-wide resource).
>>>
>>> This support for gen8+ takes care to consider a number of timing
>>> challenges involved in synchronously updating per-context state
>>> primarily by programming all config state from the cpu and updating all
>>> current and saved contexts synchronously while the OA unit is still
>>> disabled.
>>>
>>> The driver intentionally avoids depending on command streamer
>>> programming to update OA state considering the lack of synchronization
>>> between the automatic loading of OACTXCONTROL state (that includes the
>>> periodic sampling state and enable state) on context restore and the
>>> parsing of any general purpose BB the driver can control. I.e. this
>>> implementation is careful to avoid the possibility of a context restore
>>> temporarily enabling any out-of-date periodic sampling state. In
>>> addition to the risk of transiently-out-of-date state being loaded
>>> automatically, there are also internal HW latencies involved in the
>>> loading of MUX configurations which would be difficult to account for
>>> from the command streamer (and we only want to enable the unit once
>>> the MUX configuration is complete).
>>>
>>> Since the Gen8+ OA unit design no longer supports clock gating the unit
>>> off for a single given context (which effectively stopped any progress
>>> of counters while any other context was running) and instead supports
>>> tagging OA reports with a context ID for filtering on the CPU, it means
>>> we can no longer hide the system-wide progress of counters from a
>>> non-privileged application only interested in metrics for its own
>>> context. Although we could theoretically try and subtract the progress
>>> of other contexts before forwarding reports via read() we aren't in a
>>> position to filter reports captured via MI_REPORT_PERF_COUNT commands.
>>> As a result, for Gen8+, we always require the
>>> dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
>>> if not root.
>>>
>>> v5: Drain submitted requests when enabling metric set to ensure no
>>>      lite-restore erases the context image we just updated (Lionel)
>>>
>>> v6: In addition to drain, switch to kernel context & update all
>>>      context in place (Chris)
>>>
>>> Signed-off-by: Robert Bragg <robert@sixbynine.org>
>>> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
>>> Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
>>> ---
>>>   drivers/gpu/drm/i915/i915_drv.h  |  45 +-
>>>   drivers/gpu/drm/i915/i915_perf.c | 957
>>> ++++++++++++++++++++++++++++++++++++---
>>>   drivers/gpu/drm/i915/i915_reg.h  |  22 +
>>>   drivers/gpu/drm/i915/intel_lrc.c |   2 +
>>>   include/uapi/drm/i915_drm.h      |  19 +-
>>>   5 files changed, 952 insertions(+), 93 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_drv.h
>>> b/drivers/gpu/drm/i915/i915_drv.h
>>> index ffa1fc5eddfd..676b1227067c 100644
>>> --- a/drivers/gpu/drm/i915/i915_drv.h
>>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>>> @@ -2064,9 +2064,17 @@ struct i915_oa_ops {
>>>          void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
>>>
>>>          /**
>>> -        * @enable_metric_set: Applies any MUX configuration to set up
>>> the
>>> -        * Boolean and Custom (B/C) counters that are part of the counter
>>> -        * reports being sampled. May apply system constraints such as
>>> +        * @select_metric_set: The auto generated code that checks
>>> whether a
>>> +        * requested OA config is applicable to the system and if so sets
>>> up
>>> +        * the mux, oa and flex eu register config pointers according to
>>> the
>>> +        * current dev_priv->perf.oa.metrics_set.
>>> +        */
>>> +       int (*select_metric_set)(struct drm_i915_private *dev_priv);
>>> +
>>> +       /**
>>> +        * @enable_metric_set: Selects and applies any MUX configuration
>>> to set
>>> +        * up the Boolean and Custom (B/C) counters that are part of the
>>> +        * counter reports being sampled. May apply system constraints
>>> such as
>>>           * disabling EU clock gating as required.
>>>           */
>>>          int (*enable_metric_set)(struct drm_i915_private *dev_priv);
>>> @@ -2097,20 +2105,13 @@ struct i915_oa_ops {
>>>                      size_t *offset);
>>>
>>>          /**
>>> -        * @oa_buffer_check: Check for OA buffer data + update tail
>>> -        *
>>> -        * This is either called via fops or the poll check hrtimer
>>> (atomic
>>> -        * ctx) without any locks taken.
>>> +        * @oa_hw_tail_read: read the OA tail pointer register
>>>           *
>>> -        * It's safe to read OA config state here unlocked, assuming that
>>> this
>>> -        * is only called while the stream is enabled, while the global
>>> OA
>>> -        * configuration can't be modified.
>>> -        *
>>> -        * Efficiency is more important than avoiding some false
>>> positives
>>> -        * here, which will be handled gracefully - likely resulting in
>>> an
>>> -        * %EAGAIN error for userspace.
>>> +        * In particular this enables us to share all the fiddly code for
>>> +        * handling the OA unit tail pointer race that affects multiple
>>> +        * generations.
>>>           */
>>> -       bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
>>> +       u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
>>>   };
>>>
>>>   struct intel_cdclk_state {
>>> @@ -2472,6 +2473,7 @@ struct drm_i915_private {
>>>                          struct {
>>>                                  struct i915_vma *vma;
>>>                                  u8 *vaddr;
>>> +                               u32 last_ctx_id;
>>>                                  int format;
>>>                                  int format_size;
>>>
>>> @@ -2541,6 +2543,14 @@ struct drm_i915_private {
>>>                          } oa_buffer;
>>>
>>>                          u32 gen7_latched_oastatus1;
>>> +                       u32 ctx_oactxctrl_offset;
>>> +                       u32 ctx_flexeu0_offset;
>>> +
>>> +                       /* The RPT_ID/reason field for Gen8+ includes a
>>> bit
>>> +                        * to determine if the CTX ID in the report is
>>> valid
>>> +                        * but the specific bit differs between Gen 8 and
>>> 9
>>> +                        */
>>> +                       u32 gen8_valid_ctx_bit;
>>>
>>>                          struct i915_oa_ops ops;
>>>                          const struct i915_oa_format *oa_formats;
>>> @@ -2851,6 +2861,8 @@ intel_info(const struct drm_i915_private *dev_priv)
>>>   #define IS_KBL_ULX(dev_priv)   (INTEL_DEVID(dev_priv) == 0x590E || \
>>>                                   INTEL_DEVID(dev_priv) == 0x5915 || \
>>>                                   INTEL_DEVID(dev_priv) == 0x591E)
>>> +#define IS_SKL_GT2(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>>> +                                (INTEL_DEVID(dev_priv) & 0x00F0) ==
>>> 0x0010)
>>>   #define IS_SKL_GT3(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>>>                                   (INTEL_DEVID(dev_priv) & 0x00F0) ==
>>> 0x0020)
>>>   #define IS_SKL_GT4(dev_priv)   (IS_SKYLAKE(dev_priv) && \
>>> @@ -3615,6 +3627,9 @@ i915_gem_context_lookup_timeline(struct
>>> i915_gem_context *ctx,
>>>
>>>   int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>>>                           struct drm_file *file);
>>> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
>>> +                           struct i915_gem_context *ctx,
>>> +                           uint32_t *reg_state);
>>>
>>>   /* i915_gem_evict.c */
>>>   int __must_check i915_gem_evict_something(struct i915_address_space
>>> *vm,
>>> diff --git a/drivers/gpu/drm/i915/i915_perf.c
>>> b/drivers/gpu/drm/i915/i915_perf.c
>>> index 3277a52ce98e..f2688dee5a03 100644
>>> --- a/drivers/gpu/drm/i915/i915_perf.c
>>> +++ b/drivers/gpu/drm/i915/i915_perf.c
>>> @@ -196,6 +196,12 @@
>>>
>>>   #include "i915_drv.h"
>>>   #include "i915_oa_hsw.h"
>>> +#include "i915_oa_bdw.h"
>>> +#include "i915_oa_chv.h"
>>> +#include "i915_oa_sklgt2.h"
>>> +#include "i915_oa_sklgt3.h"
>>> +#include "i915_oa_sklgt4.h"
>>> +#include "i915_oa_bxt.h"
>>>
>>>   /* HW requires this to be a power of two, between 128k and 16M, though
>>> driver
>>>    * is currently generally designed assuming the largest 16M size is
>>> used such
>>> @@ -215,7 +221,7 @@
>>>    *
>>>    * Although this can be observed explicitly while copying reports to
>>> userspace
>>>    * by checking for a zeroed report-id field in tail reports, we want to
>>> account
>>> - * for this earlier, as part of the _oa_buffer_check to avoid lots of
>>> redundant
>>> + * for this earlier, as part of the oa_buffer_check to avoid lots of
>>> redundant
>>>    * read() attempts.
>>>    *
>>>    * In effect we define a tail pointer for reading that lags the real
>>> tail
>>> @@ -237,7 +243,7 @@
>>>    * indicates that an updated tail pointer is needed.
>>>    *
>>>    * Most of the implementation details for this workaround are in
>>> - * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
>>> + * oa_buffer_check_unlocked() and _append_oa_reports()
>>>    *
>>>    * Note for posterity: previously the driver used to define an
>>> effective tail
>>>    * pointer that lagged the real pointer by a 'tail margin' measured in
>>> bytes
>>> @@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
>>>
>>>   #define INVALID_CTX_ID 0xffffffff
>>>
>>> +/* On Gen8+ automatically triggered OA reports include a 'reason'
>>> field... */
>>> +#define OAREPORT_REASON_MASK           0x3f
>>> +#define OAREPORT_REASON_SHIFT          19
>>> +#define OAREPORT_REASON_TIMER          (1<<0)
>>> +#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
>>> +#define OAREPORT_REASON_CLK_RATIO      (1<<5)
>>> +
>>>
>>>   /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
>>>    *
>>> @@ -303,6 +316,13 @@ static struct i915_oa_format
>>> hsw_oa_formats[I915_OA_FORMAT_MAX] = {
>>>          [I915_OA_FORMAT_C4_B8]      = { 7, 64 },
>>>   };
>>>
>>> +static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] =
>>> {
>>> +       [I915_OA_FORMAT_A12]                = { 0, 64 },
>>> +       [I915_OA_FORMAT_A12_B8_C8]          = { 2, 128 },
>>> +       [I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
>>> +       [I915_OA_FORMAT_C4_B8]              = { 7, 64 },
>>> +};
>>> +
>>>   #define SAMPLE_OA_REPORT      (1<<0)
>>>
>>>   /**
>>> @@ -332,8 +352,20 @@ struct perf_open_properties {
>>>          int oa_period_exponent;
>>>   };
>>>
>>> +static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
>>> +{
>>> +       return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
>>> +}
>>> +
>>> +static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
>>> +{
>>> +       u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
>>> +
>>> +       return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
>>> +}
>>> +
>>>   /**
>>> - * gen7_oa_buffer_check_unlocked - check for data and update tail ptr
>>> state
>>> + * oa_buffer_check_unlocked - check for data and update tail ptr state
>>>    * @dev_priv: i915 device instance
>>>    *
>>>    * This is either called via fops (for blocking reads in user ctx) or
>>> the poll
>>> @@ -356,12 +388,11 @@ struct perf_open_properties {
>>>    *
>>>    * Returns: %true if the OA buffer contains data, else %false
>>>    */
>>> -static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private
>>> *dev_priv)
>>> +static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>>>   {
>>>          int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>>>          unsigned long flags;
>>>          unsigned int aged_idx;
>>> -       u32 oastatus1;
>>>          u32 head, hw_tail, aged_tail, aging_tail;
>>>          u64 now;
>>>
>>> @@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct
>>> drm_i915_private *dev_priv)
>>>          aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
>>>          aging_tail =
>>> dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
>>>
>>> -       oastatus1 = I915_READ(GEN7_OASTATUS1);
>>> -       hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
>>> +       hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
>>>
>>>          /* The tail pointer increases in 64 byte increments,
>>>           * not in report_size steps...
>>> @@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct
>>> drm_i915_private *dev_priv)
>>>          if (aging_tail != INVALID_TAIL_PTR &&
>>>              ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
>>>               OA_TAIL_MARGIN_NSEC)) {
>>> +
>>>                  aged_idx ^= 1;
>>>                  dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
>>>
>>> @@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream
>>> *stream,
>>>    *
>>>    * Returns: 0 on success, negative error code on failure.
>>>    */
>>> +static int gen8_append_oa_reports(struct i915_perf_stream *stream,
>>> +                                 char __user *buf,
>>> +                                 size_t count,
>>> +                                 size_t *offset)
>>> +{
>>> +       struct drm_i915_private *dev_priv = stream->dev_priv;
>>> +       int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>>> +       u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
>>> +       u32 gtt_offset =
>>> i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
>>> +       u32 mask = (OA_BUFFER_SIZE - 1);
>>> +       size_t start_offset = *offset;
>>> +       unsigned long flags;
>>> +       unsigned int aged_tail_idx;
>>> +       u32 head, tail;
>>> +       u32 taken;
>>> +       int ret = 0;
>>> +
>>> +       if (WARN_ON(!stream->enabled))
>>> +               return -EIO;
>>> +
>>> +       spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>>> +
>>> +       head = dev_priv->perf.oa.oa_buffer.head;
>>> +       aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
>>> +       tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
>>> +
>>> +       spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock,
>>> flags);
>>> +
>>> +       /* An invalid tail pointer here means we're still waiting for the
>>> poll
>>> +        * hrtimer callback to give us a pointer
>>> +        */
>>> +       if (tail == INVALID_TAIL_PTR)
>>> +               return -EAGAIN;
>>> +
>>> +       /* NB: oa_buffer.head/tail include the gtt_offset which we don't
>>> want
>>> +        * while indexing relative to oa_buf_base.
>>> +        */
>>> +       head -= gtt_offset;
>>> +       tail -= gtt_offset;
>>> +
>>> +       /* An out of bounds or misaligned head or tail pointer implies a
>>> driver
>>> +        * bug since we validate + align the tail pointers we read from
>>> the
>>> +        * hardware and we are in full control of the head pointer which
>>> should
>>> +        * only be incremented by multiples of the report size (notably
>>> also
>>> +        * all a power of two).
>>> +        */
>>> +       if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
>>> +                     tail > OA_BUFFER_SIZE || tail % report_size,
>>> +                     "Inconsistent OA buffer pointers: head = %u, tail =
>>> %u\n",
>>> +                     head, tail))
>>> +               return -EIO;
>>> +
>>> +
>>> +       for (/* none */;
>>> +            (taken = OA_TAKEN(tail, head));
>>> +            head = (head + report_size) & mask) {
>>> +               u8 *report = oa_buf_base + head;
>>> +               u32 *report32 = (void *)report;
>>> +               u32 ctx_id;
>>> +               u32 reason;
>>> +
>>> +               /* All the report sizes factor neatly into the buffer
>>> +                * size so we never expect to see a report split
>>> +                * between the beginning and end of the buffer.
>>> +                *
>>> +                * Given the initial alignment check a misalignment
>>> +                * here would imply a driver bug that would result
>>> +                * in an overrun.
>>> +                */
>>> +               if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
>>> +                       DRM_ERROR("Spurious OA head ptr: non-integral
>>> report offset\n");
>>> +                       break;
>>> +               }
>>> +
>>> +               /* The reason field includes flags identifying what
>>> +                * triggered this specific report (mostly timer
>>> +                * triggered or e.g. due to a context switch).
>>> +                *
>>> +                * This field is never expected to be zero so we can
>>> +                * check that the report isn't invalid before copying
>>> +                * it to userspace...
>>> +                */
>>> +               reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
>>> +                         OAREPORT_REASON_MASK);
>>> +               if (reason == 0) {
>>> +                       if
>>> (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
>>> +                               DRM_NOTE("Skipping spurious, invalid OA
>>> report\n");
>>> +                       continue;
>>> +               }
>>> +
>>> +               /* XXX: Just keep the lower 21 bits for now since I'm not
>>> +                * entirely sure if the HW touches any of the higher bits
>>> in
>>> +                * this field
>>> +                */
>>> +               ctx_id = report32[2] & 0x1fffff;
>>> +
>>> +               /* Squash whatever is in the CTX_ID field if it's marked
>>> as
>>> +                * invalid to be sure we avoid false-positive,
>>> single-context
>>> +                * filtering below...
>>> +                *
>>> +                * Note: that we don't clear the valid_ctx_bit so
>>> userspace can
>>> +                * understand that the ID has been squashed by the
>>> kernel.
>>> +                */
>>> +               if (!(report32[0] &
>>> dev_priv->perf.oa.gen8_valid_ctx_bit))
>>> +                       ctx_id = report32[2] = INVALID_CTX_ID;
>>> +
>>> +               /* NB: For Gen 8 the OA unit no longer supports clock
>>> gating
>>> +                * off for a specific context and the kernel can't
>>> securely
>>> +                * stop the counters from updating as system-wide /
>>> global
>>> +                * values.
>>> +                *
>>> +                * Automatic reports now include a context ID so reports
>>> can be
>>> +                * filtered on the cpu but it's not worth trying to
>>> +                * automatically subtract/hide counter progress for other
>>> +                * contexts while filtering since we can't stop userspace
>>> +                * issuing MI_REPORT_PERF_COUNT commands which would
>>> still
>>> +                * provide a side-band view of the real values.
>>> +                *
>>> +                * To allow userspace (such as
>>> Mesa/GL_INTEL_performance_query)
>>> +                * to normalize counters for a single filtered context
>>> then it
>>> +                * needs be forwarded bookend context-switch reports so
>>> that it
>>> +                * can track switches in between MI_REPORT_PERF_COUNT
>>> commands
>>> +                * and can itself subtract/ignore the progress of
>>> counters
>>> +                * associated with other contexts. Note that the hardware
>>> +                * automatically triggers reports when switching to a new
>>> +                * context which are tagged with the ID of the newly
>>> active
>>> +                * context. To avoid the complexity (and likely
>>> fragility) of
>>> +                * reading ahead while parsing reports to try and
>>> minimize
>>> +                * forwarding redundant context switch reports (i.e.
>>> between
>>> +                * other, unrelated contexts) we simply elect to forward
>>> them
>>> +                * all.
>>> +                *
>>> +                * We don't rely solely on the reason field to identify
>>> context
>>> +                * switches since it's not-uncommon for periodic samples
>>> to
>>> +                * identify a switch before any 'context switch' report.
>>> +                */
>>> +               if (!dev_priv->perf.oa.exclusive_stream->ctx ||
>>> +                   dev_priv->perf.oa.specific_ctx_id == ctx_id ||
>>> +                   (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
>>> +                    dev_priv->perf.oa.specific_ctx_id) ||
>>> +                   reason & OAREPORT_REASON_CTX_SWITCH) {
>>> +
>>> +                       /* While filtering for a single context we avoid
>>> +                        * leaking the IDs of other contexts.
>>> +                        */
>>> +                       if (dev_priv->perf.oa.exclusive_stream->ctx &&
>>> +                           dev_priv->perf.oa.specific_ctx_id != ctx_id)
>>> {
>>> +                               report32[2] = INVALID_CTX_ID;
>>> +                       }
>>> +
>>> +                       ret = append_oa_sample(stream, buf, count,
>>> offset,
>>> +                                              report);
>>> +                       if (ret)
>>> +                               break;
>>> +
>>> +                       dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
>>> +               }
>>> +
>>> +               /* The above reason field sanity check is based on
>>> +                * the assumption that the OA buffer is initially
>>> +                * zeroed and we reset the field after copying so the
>>> +                * check is still meaningful once old reports start
>>> +                * being overwritten.
>>> +                */
>>> +               report32[0] = 0;
>>> +       }
>>> +
>>> +       if (start_offset != *offset) {
>>> +               spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock,
>>> flags);
>>> +
>>> +               /* We removed the gtt_offset for the copy loop above,
>>> indexing
>>> +                * relative to oa_buf_base so put back here...
>>> +                */
>>> +               head += gtt_offset;
>>> +
>>> +               I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
>>> +               dev_priv->perf.oa.oa_buffer.head = head;
>>> +
>>> +
>>> spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>>> +       }
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +/**
>>> + * gen8_oa_read - copy status records then buffered OA reports
>>> + * @stream: An i915-perf stream opened for OA metrics
>>> + * @buf: destination buffer given by userspace
>>> + * @count: the number of bytes userspace wants to read
>>> + * @offset: (inout): the current position for writing into @buf
>>> + *
>>> + * Checks OA unit status registers and if necessary appends
>>> corresponding
>>> + * status records for userspace (such as for a buffer full condition)
>>> and then
>>> + * initiate appending any buffered OA reports.
>>> + *
>>> + * Updates @offset according to the number of bytes successfully copied
>>> into
>>> + * the userspace buffer.
>>> + *
>>> + * NB: some data may be successfully copied to the userspace buffer
>>> + * even if an error is returned, and this is reflected in the
>>> + * updated @offset.
>>> + *
>>> + * Returns: zero on success or a negative error code
>>> + */
>>> +static int gen8_oa_read(struct i915_perf_stream *stream,
>>> +                       char __user *buf,
>>> +                       size_t count,
>>> +                       size_t *offset)
>>> +{
>>> +       struct drm_i915_private *dev_priv = stream->dev_priv;
>>> +       u32 oastatus;
>>> +       int ret;
>>> +
>>> +       if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
>>> +               return -EIO;
>>> +
>>> +       oastatus = I915_READ(GEN8_OASTATUS);
>>> +
>>> +       /* We treat OABUFFER_OVERFLOW as a significant error:
>>> +        *
>>> +        * Although theoretically we could handle this more gracefully
>>> +        * sometimes, some Gens don't correctly suppress certain
>>> +        * automatically triggered reports in this condition and so we
>>> +        * have to assume that old reports are now being trampled
>>> +        * over.
>>> +        *
>>> +        * Considering how we don't currently give userspace control
>>> +        * over the OA buffer size and always configure a large 16MB
>>> +        * buffer, then a buffer overflow does anyway likely indicate
>>> +        * that something has gone quite badly wrong.
>>> +        */
>>> +       if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
>>> +               ret = append_oa_status(stream, buf, count, offset,
>>> +
>>> DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
>>> +               if (ret)
>>> +                       return ret;
>>> +
>>> +               DRM_DEBUG("OA buffer overflow (exponent = %d): force
>>> restart\n",
>>> +                         dev_priv->perf.oa.period_exponent);
>>> +
>>> +               dev_priv->perf.oa.ops.oa_disable(dev_priv);
>>> +               dev_priv->perf.oa.ops.oa_enable(dev_priv);
>>> +
>>> +               /* Note: .oa_enable() is expected to re-init the oabuffer
>>> +                * and reset GEN8_OASTATUS for us
>>> +                */
>>> +               oastatus = I915_READ(GEN8_OASTATUS);
>>> +       }
>>> +
>>> +       if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
>>> +               ret = append_oa_status(stream, buf, count, offset,
>>> +
>>> DRM_I915_PERF_RECORD_OA_REPORT_LOST);
>>> +               if (ret)
>>> +                       return ret;
>>> +               I915_WRITE(GEN8_OASTATUS,
>>> +                          oastatus & ~GEN8_OASTATUS_REPORT_LOST);
>>> +       }
>>> +
>>> +       return gen8_append_oa_reports(stream, buf, count, offset);
>>> +}
>>> +
>>> +/**
>>> + * Copies all buffered OA reports into userspace read() buffer.
>>> + * @stream: An i915-perf stream opened for OA metrics
>>> + * @buf: destination buffer given by userspace
>>> + * @count: the number of bytes userspace wants to read
>>> + * @offset: (inout): the current position for writing into @buf
>>> + *
>>> + * Notably any error condition resulting in a short read (-%ENOSPC or
>>> + * -%EFAULT) will be returned even though one or more records may
>>> + * have been successfully copied. In this case it's up to the caller
>>> + * to decide if the error should be squashed before returning to
>>> + * userspace.
>>> + *
>>> + * Note: reports are consumed from the head, and appended to the
>>> + * tail, so the tail chases the head?... If you think that's mad
>>> + * and back-to-front you're not alone, but this follows the
>>> + * Gen PRM naming convention.
>>> + *
>>> + * Returns: 0 on success, negative error code on failure.
>>> + */
>>>   static int gen7_append_oa_reports(struct i915_perf_stream *stream,
>>>                                    char __user *buf,
>>>                                    size_t count,
>>> @@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream
>>> *stream,
>>>                  if (ret)
>>>                          return ret;
>>>
>>> -               DRM_DEBUG("OA buffer overflow: force restart\n");
>>> +               DRM_DEBUG("OA buffer overflow (exponent = %d): force
>>> restart\n",
>>> +                         dev_priv->perf.oa.period_exponent);
>>>
>>>                  dev_priv->perf.oa.ops.oa_disable(dev_priv);
>>>                  dev_priv->perf.oa.ops.oa_enable(dev_priv);
>>> @@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct
>>> i915_perf_stream *stream)
>>>                  return -EIO;
>>>
>>>          return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
>>> -
>>> dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
>>> +
>>> oa_buffer_check_unlocked(dev_priv));
>>>   }
>>>
>>>   /**
>>> @@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream
>>> *stream,
>>>   static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
>>>   {
>>>          struct drm_i915_private *dev_priv = stream->dev_priv;
>>> -       struct intel_engine_cs *engine = dev_priv->engine[RCS];
>>> -       int ret;
>>>
>>> -       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
>>> -       if (ret)
>>> -               return ret;
>>> +       if (i915.enable_execlists)
>>> +               dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
>>> +       else {
>>> +               struct intel_engine_cs *engine = dev_priv->engine[RCS];
>>> +               int ret;
>>>
>>> -       /* As the ID is the gtt offset of the context's vma we pin
>>> -        * the vma to ensure the ID remains fixed.
>>> -        *
>>> -        * NB: implied RCS engine...
>>> -        */
>>> -       ret = engine->context_pin(engine, stream->ctx);
>>> -       if (ret)
>>> -               goto unlock;
>>> +               ret = i915_mutex_lock_interruptible(&dev_priv->drm);
>>> +               if (ret)
>>> +                       return ret;
>>>
>>> -       /* Explicitly track the ID (instead of calling i915_ggtt_offset()
>>> -        * on the fly) considering the difference with gen8+ and
>>> -        * execlists
>>> -        */
>>> -       dev_priv->perf.oa.specific_ctx_id =
>>> -               i915_ggtt_offset(stream->ctx->engine[engine->id].state);
>>> +               /* As the ID is the gtt offset of the context's vma we
>>> pin
>>> +                * the vma to ensure the ID remains fixed.
>>> +                */
>>> +               ret = engine->context_pin(engine, stream->ctx);
>>> +               if (ret) {
>>> +                       mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +                       return ret;
>>> +               }
>>>
>>> -unlock:
>>> -       mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +               /* Explicitly track the ID (instead of calling
>>> +                * i915_ggtt_offset() on the fly) considering the
>>> difference
>>> +                * with gen8+ and execlists
>>> +                */
>>> +               dev_priv->perf.oa.specific_ctx_id =
>>> +
>>> i915_ggtt_offset(stream->ctx->engine[engine->id].state);
>>>
>>> -       return ret;
>>> +               mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +       }
>>> +
>>> +       return 0;
>>>   }
>>>
>>>   /**
>>> @@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct
>>> i915_perf_stream *stream)
>>>          struct drm_i915_private *dev_priv = stream->dev_priv;
>>>          struct intel_engine_cs *engine = dev_priv->engine[RCS];
>>>
>>> -       mutex_lock(&dev_priv->drm.struct_mutex);
>>> +       if (i915.enable_execlists) {
>>> +               dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
>>> +       } else {
>>> +               mutex_lock(&dev_priv->drm.struct_mutex);
>>>
>>> -       dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
>>> -       engine->context_unpin(engine, stream->ctx);
>>> +               dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
>>> +               engine->context_unpin(engine, stream->ctx);
>>>
>>> -       mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +               mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +       }
>>>   }
>>>
>>>   static void
>>> @@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct
>>> drm_i915_private *dev_priv)
>>>          dev_priv->perf.oa.pollin = false;
>>>   }
>>>
>>> +static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
>>> +{
>>> +       u32 gtt_offset =
>>> i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
>>> +       unsigned long flags;
>>> +
>>> +       spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
>>> +
>>> +       I915_WRITE(GEN8_OASTATUS, 0);
>>> +       I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
>>> +       dev_priv->perf.oa.oa_buffer.head = gtt_offset;
>>> +
>>> +       I915_WRITE(GEN8_OABUFFER_UDW, 0);
>>> +
>>> +       /* PRM says:
>>> +        *
>>> +        *  "This MMIO must be set before the OATAILPTR
>>> +        *  register and after the OAHEADPTR register. This is
>>> +        *  to enable proper functionality of the overflow
>>> +        *  bit."
>>> +        */
>>> +       I915_WRITE(GEN8_OABUFFER, gtt_offset |
>>> +                  OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
>>> +       I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
>>> +
>>> +       /* Mark that we need updated tail pointers to read from... */
>>> +       dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
>>> +       dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
>>> +
>>> +       /* Reset state used to recognise context switches, affecting
>>> which
>>> +        * reports we will forward to userspace while filtering for a
>>> single
>>> +        * context.
>>> +        */
>>> +       dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
>>> +
>>> +       spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock,
>>> flags);
>>> +
>>> +       /* NB: although the OA buffer will initially be allocated
>>> +        * zeroed via shmfs (and so this memset is redundant when
>>> +        * first allocating), we may re-init the OA buffer, either
>>> +        * when re-enabling a stream or in error/reset paths.
>>> +        *
>>> +        * The reason we clear the buffer for each re-init is for the
>>> +        * sanity check in gen8_append_oa_reports() that looks at the
>>> +        * reason field to make sure it's non-zero which relies on
>>> +        * the assumption that new reports are being written to zeroed
>>> +        * memory...
>>> +        */
>>> +       memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
>>> +
>>> +       /* Maybe make ->pollin per-stream state if we support multiple
>>> +        * concurrent streams in the future.
>>> +        */
>>> +       dev_priv->perf.oa.pollin = false;
>>> +}
>>> +
>>>   static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
>>>   {
>>>          struct drm_i915_gem_object *bo;
>>> @@ -1113,6 +1489,280 @@ static void hsw_disable_metric_set(struct
>>> drm_i915_private *dev_priv)
>>>                                        ~GT_NOA_ENABLE));
>>>   }
>>>
>>> +/*
>>> + * From Broadwell PRM, 3D-Media-GPGPU -> Register State Context
>>> + *
>>> + * MMIO reads or writes to any of the registers listed in the
>>> + * “Register State Context image” subsections through HOST/IA
>>> + * MMIO interface for debug purposes must follow the steps below:
>>> + *
>>> + * - SW should set the Force Wakeup bit to prevent GT from entering C6.
>>> + * - Write 0x2050[31:0] = 0x00010001 (disable sequence).
>>> + * - Disable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010001).
>>> + * - BDW:  Poll/Wait for register bits of 0x22AC[6:0] turn to 0x30
>>> value.
>>> + * - SKL+: Poll/Wait for register bits of 0x22A4[6:0] turn to 0x30
>>> value.
>>> + * - Read/Write to desired MMIO registers.
>>> + * - Enable IDLE messaging in CS (Write 0x2050[31:0] = 0x00010000).
>>> + * - Force Wakeup bit should be reset to enable C6 entry.
>>> + *
>>> + * XXX: don't nest or overlap calls to this API, it has no ref
>>> + * counting to track how many entities require the RCS to be
>>> + * blocked from being idle.
>>> + */
>>> +static int gen8_begin_ctx_mmio(struct drm_i915_private *dev_priv)
>>> +{
>>> +       i915_reg_t fsm_reg = INTEL_GEN(dev_priv) > 8 ?
>>> +               GEN9_RCS_FE_FSM2 : GEN6_RCS_PWR_FSM;
>>> +       int ret = 0;
>>> +
>>> +       /* There's only no active context while idle in execlist mode
>>> +        * (though we shouldn't be using this in any other case)
>>> +        */
>>> +       if (WARN_ON(!i915.enable_execlists))
>>> +               return -EINVAL;
>>> +
>>> +       intel_uncore_forcewake_get(dev_priv, FORCEWAKE_RENDER);
>>> +
>>> +       I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
>>> +                  _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
>>> +
>>> +       /* Note: we don't currently have a good handle on the maximum
>>> +        * latency for this wake up so while we only need to hold rcs
>>> +        * busy from process context we at least keep the waiting
>>> +        * interruptible...
>>> +        */
>>> +       ret = intel_wait_for_register(dev_priv, fsm_reg, 0x3f, 0x30, 50);
>>> +       if (ret == -ETIMEDOUT)
>>> +               DRM_ERROR("Timeout waiting to be able to begin per-ctx
>>> mmio\n");
>>> +
>>> +       if (ret)
>>> +               intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static void gen8_end_ctx_mmio(struct drm_i915_private *dev_priv)
>>> +{
>>> +       if (WARN_ON(!i915.enable_execlists))
>>> +               return;
>>> +
>>> +       I915_WRITE(GEN6_RC_SLEEP_PSMI_CONTROL,
>>> +                  _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
>>> +       intel_uncore_forcewake_put(dev_priv, FORCEWAKE_RENDER);
>>> +}
>>> +
>>> +/* NB: It must always remain pointer safe to run this even if the OA
>>> unit
>>> + * has been disabled.
>>> + *
>>> + * It's fine to put out-of-date values into these per-context registers
>>> + * in the case that the OA unit has been disabled.
>>> + */
>>> +static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
>>> +                                          u32 *reg_state)
>>> +{
>>> +       struct drm_i915_private *dev_priv = ctx->i915;
>>> +       const struct i915_oa_reg *flex_regs =
>>> dev_priv->perf.oa.flex_regs;
>>> +       int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
>>> +       u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
>>> +       u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
>>> +       /* The MMIO offsets for Flex EU registers aren't contiguous */
>>> +       u32 flex_mmio[] = {
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL0),
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL1),
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL2),
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL3),
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL4),
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL5),
>>> +               i915_mmio_reg_offset(EU_PERF_CNTL6),
>>> +       };
>>> +       int i;
>>> +
>>> +       reg_state[ctx_oactxctrl] =
>>> i915_mmio_reg_offset(GEN8_OACTXCONTROL);
>>> +       reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent
>>> <<
>>> +                                     GEN8_OA_TIMER_PERIOD_SHIFT) |
>>> +                                    (dev_priv->perf.oa.periodic ?
>>> +                                     GEN8_OA_TIMER_ENABLE : 0) |
>>> +                                    GEN8_OA_COUNTER_RESUME;
>>> +
>>> +       for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
>>> +               u32 state_offset = ctx_flexeu0 + i * 2;
>>> +               u32 mmio = flex_mmio[i];
>>> +
>>> +               /* This arbitrary default will select the 'EU FPU0
>>> Pipeline
>>> +                * Active' event. In the future it's anticipated that
>>> there
>>> +                * will be an explicit 'No Event' we can select, but not
>>> yet...
>>> +                */
>>> +               u32 value = 0;
>>> +               int j;
>>> +
>>> +               for (j = 0; j < n_flex_regs; j++) {
>>> +                       if (i915_mmio_reg_offset(flex_regs[j].addr) ==
>>> mmio) {
>>> +                               value = flex_regs[j].value;
>>> +                               break;
>>> +                       }
>>> +               }
>>> +
>>> +               reg_state[state_offset] = mmio;
>>> +               reg_state[state_offset+1] = value;
>>> +       }
>>> +}
>>> +
>>> +/* Manages updating the per-context aspects of the OA stream
>>> + * configuration across all contexts.
>>> + *
>>> + * The awkward consideration here is that OACTXCONTROL controls the
>>> + * exponent for periodic sampling which is primarily used for system
>>> + * wide profiling where we'd like a consistent sampling period even in
>>> + * the face of context switches.
>>> + *
>>> + * Our approach of updating the register state context (as opposed to
>>> + * say using a workaround batch buffer) ensures that the hardware
>>> + * won't automatically reload an out-of-date timer exponent even
>>> + * transiently before a WA BB could be parsed.
>>> + *
>>> + * This function needs to:
>>> + * - Ensure the currently running context's per-context OA state is
>>> + *   updated
>>> + * - Ensure that all existing contexts will have the correct per-context
>>> + *   OA state if they are scheduled for use.
>>> + * - Ensure any new contexts will be initialized with the correct
>>> + *   per-context OA state.
>>> + *
>>> + * Note: it's only the RCS/Render context that has any OA state.
>>> + */
>>> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
>>> +{
>>> +       struct i915_gem_context *ctx;
>>> +       int ret;
>>> +
>>> +       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
>>> +       if (ret)
>>> +               return ret;
>>> +
>>> +       /* Switch away from any user context.  */
>>> +       ret = i915_gem_switch_to_kernel_context(dev_priv);
>>> +       if (ret)
>>
>>   mutex_unlock(&dev_priv->drm.struct_mutex);
>
>
> Duh! Thanks!
>
>>
>>> +               return ret;
>>> +
>>> +       /* The OA register config is setup through the context image. This image
>>> +        * might be written to by the GPU on context switch (in particular on
>>> +        * lite-restore). This means we can't safely update a context's image,
>>> +        * if this context is scheduled/submitted to run on the GPU.
>>> +        *
>>> +        * We could emit the OA register config through the batch buffer but
>>> +        * this might leave small interval of time where the OA unit is
>>> +        * configured at an invalid sampling period.
>>> +        *
>>> +        * So far the best way to work around this issue seems to be draining
>>> +        * the GPU from any submitted work.
>>> +        */
>>> +       ret = i915_gem_wait_for_idle(dev_priv,
>>> +                                    I915_WAIT_INTERRUPTIBLE |
>>> +                                    I915_WAIT_LOCKED);
>>> +       if (ret) {
>>> +               mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +               return ret;
>>> +       }
>>> +
>>> +       /* Update all contexts now that we've stalled the submission. */
>>> +       list_for_each_entry(ctx, &dev_priv->context_list, link) {
>>> +               if (!ctx->engine[RCS].initialised)
>>> +                       continue;
>>> +
>>> +               gen8_update_reg_state_unlocked(ctx,
>>> +                                              ctx->engine[RCS].lrc_reg_state);
>>> +       }
>>> +
>>> +       mutex_unlock(&dev_priv->drm.struct_mutex);
>>> +
>>> +       /* Now update the current context.
>>> +        *
>>> +        * Note: Using MMIO to update per-context registers requires
>>> +        * some extra care...
>>> +        */
>>> +       ret = gen8_begin_ctx_mmio(dev_priv);
>>> +       if (ret) {
>>> +               DRM_ERROR("Failed to bring RCS out of idle to update
>>> current ctx OA state\n");
>>> +               return ret;
>>> +       }
>>
>> So do we still need the mmio for the current context, given that we
>> now idle everything and then update the reg state?
>
>
> I would say yes, because we don't have any guarantee that another context is
> going to be scheduled after that.
> Yet we want the config to be applied before we return.

Okay, feel free to keep my r-b then.

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v7 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-25 17:30           ` [PATCH v7 " Lionel Landwerlin
@ 2017-04-27  8:53             ` Chris Wilson
  2017-05-02  1:17               ` [PATCH v9 " Lionel Landwerlin
  0 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2017-04-27  8:53 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

On Tue, Apr 25, 2017 at 10:30:07AM -0700, Lionel Landwerlin wrote:
> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_gem_context *ctx;
> +	int ret;
> +
> +	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +	if (ret)
> +		return ret;
> +
> +	/* Switch away from any user context.  */
> +	ret = i915_gem_switch_to_kernel_context(dev_priv);
> +	if (ret) {
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +		return ret;
> +	}
> +
> +	/* The OA register config is setup through the context image. This image
> +	 * might be written to by the GPU on context switch (in particular on
> +	 * lite-restore). This means we can't safely update a context's image,
> +	 * if this context is scheduled/submitted to run on the GPU.
> +	 *
> +	 * We could emit the OA register config through the batch buffer but
> +	 * this might leave small interval of time where the OA unit is
> +	 * configured at an invalid sampling period.
> +	 *
> +	 * So far the best way to work around this issue seems to be draining
> +	 * the GPU from any submitted work.
> +	 */
> +	ret = i915_gem_wait_for_idle(dev_priv,
> +				     I915_WAIT_INTERRUPTIBLE |
> +				     I915_WAIT_LOCKED);
> +	if (ret) {
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +		return ret;
> +	}
> +
> +	/* Update all contexts now that we've stalled the submission. */
> +	list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +		if (!ctx->engine[RCS].initialised)
> +			continue;
> +

You need to pin the context here, otherwise there is no guarantee that
the lrc_reg_state exists, or map it directly.

> +		gen8_update_reg_state_unlocked(ctx,
> +					       ctx->engine[RCS].lrc_reg_state);
> +	}
> +
> +	mutex_unlock(&dev_priv->drm.struct_mutex);
> +
> +	/* Now update the current context.

You don't need to. The current context is the kernel context, it is
scratch and never used by userspace so no oa-reports.

> +	 *
> +	 * Note: Using MMIO to update per-context registers requires
> +	 * some extra care...
> +	 */
> +	ret = gen8_begin_ctx_mmio(dev_priv);
> +	if (ret) {
> +		DRM_ERROR("Failed to bring RCS out of idle to update current ctx OA state\n");
> +		return ret;
> +	}
> +
> +	I915_WRITE(GEN8_OACTXCONTROL, ((dev_priv->perf.oa.period_exponent <<
> +					GEN8_OA_TIMER_PERIOD_SHIFT) |
> +				      (dev_priv->perf.oa.periodic ?
> +				       GEN8_OA_TIMER_ENABLE : 0) |
> +				      GEN8_OA_COUNTER_RESUME));
> +
> +	config_oa_regs(dev_priv, dev_priv->perf.oa.flex_regs,
> +			dev_priv->perf.oa.flex_regs_len);
> +
> +	gen8_end_ctx_mmio(dev_priv);

This entire chunk can go and I don't need to critique
gen8_begin_ctx_mmio() -- it needs a bit of tlc.

This patch is not ready.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v9 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-04-27  8:53             ` Chris Wilson
@ 2017-05-02  1:17               ` Lionel Landwerlin
  2017-05-02 20:14                 ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-02  1:17 UTC (permalink / raw)
  To: intel-gfx; +Cc: matthew.auld

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).
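
(As a rough illustration only, and not part of this patch: the sampling
exponent maps to a period roughly as sketched below, assuming the
conventional 2^(exponent + 1) timestamp ticks between samples; the
12.5MHz timestamp is just an example since the real timestamp frequency
differs between gens.)

static u64 oa_exponent_to_ns(u64 timestamp_frequency_hz, int exponent)
{
	/* one sample every 2^(exponent + 1) timestamp ticks */
	return (1000000000ULL * (2ULL << exponent)) / timestamp_frequency_hz;
}

/* e.g. at 12.5MHz: exponent 0 -> 160ns, exponent 16 -> ~10.5ms */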

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently-out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, it means
we can no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try and subtract the progress
of other contexts before forwarding reports via read() we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require the
dev.i915.perf_stream_paranoid sysctl to be unset for any access to OA
metrics if not root.
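
(For illustration, a minimal userspace sketch of opening a periodic OA
sampling stream; this assumes the i915 perf open uapi as it exists
today, the metrics set ID is a placeholder that would normally be read
from the metric set's sysfs entry, and without CAP_SYS_ADMIN the ioctl
is expected to fail with EACCES unless dev.i915.perf_stream_paranoid is
set to 0.)

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int open_oa_stream(int drm_fd, uint64_t metrics_set, uint64_t exponent)
{
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, exponent,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	/* Returns a new stream fd on success, -1 with errno set on failure */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}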

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

v6: In addition to drain, switch to kernel context & update all
    contexts in place (Chris)

v7: Add missing mutex_unlock() if switching to kernel context fails
    (Matthew)

v8: Simplify OA period/flex-eu-counters programming by using the
    batchbuffer instead of modifying ctx-image (Lionel)

v9: Back to updating the context image (due to erroneous testing,
    batchbuffer programming the OA unit doesn't actually work)
    (Lionel)
    Pin context before updating context image (Chris)
    Drop MMIO programming now that we switch to a kernel context with
    right values in initial context image (Chris)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
 drivers/gpu/drm/i915/i915_drv.h  |  45 +-
 drivers/gpu/drm/i915/i915_perf.c | 879 +++++++++++++++++++++++++++++++++++----
 drivers/gpu/drm/i915/i915_reg.h  |  22 +
 drivers/gpu/drm/i915/intel_lrc.c |   2 +
 include/uapi/drm/i915_drm.h      |  19 +-
 5 files changed, 874 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f15751534c29..eaf4ce5e5790 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2063,9 +2063,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);

 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2096,20 +2104,13 @@ struct i915_oa_ops {
 		    size_t *offset);

 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };

 struct intel_cdclk_state {
@@ -2470,6 +2471,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;

@@ -2539,6 +2541,14 @@ struct drm_i915_private {
 			} oa_buffer;

 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;

 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2849,6 +2859,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3612,6 +3624,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,

 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    uint32_t *reg_state);

 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 3277a52ce98e..4696634f984e 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@

 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"

 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;

 #define INVALID_CTX_ID 0xffffffff

+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+

 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };

+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)

 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };

+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;

@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;

-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);

 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;

@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note: that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context then it
+		 * needs to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not-uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiates appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering how we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, then a buffer overflow does anyway likely indicate
+	 * that something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * Copies all buffered OA reports into userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;

-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);

 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;

 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }

 /**
@@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	int ret;

-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		int ret;

-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ret = engine->context_pin(engine, stream->ctx);
-	if (ret)
-		goto unlock;
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;

-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we pin
+		 * the vma to ensure the ID remains fixed.
+		 */
+		ret = engine->context_pin(engine, stream->ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}

-unlock:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);

-	return ret;
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
+
+	return 0;
 }

 /**
@@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[RCS];

-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);

-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);

-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }

 static void
@@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }

+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1113,6 +1489,202 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }

+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine = dev_priv->engine[RCS];
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+	if (ret)
+		return ret;
+
+	/* Switch away from any user context.  */
+	ret = i915_gem_switch_to_kernel_context(dev_priv);
+	if (ret) {
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		return ret;
+	}
+
+	/* The OA register config is setup through the context image. This image
+	 * might be written to by the GPU on context switch (in particular on
+	 * lite-restore). This means we can't safely update a context's image,
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv,
+				     I915_WAIT_INTERRUPTIBLE |
+				     I915_WAIT_LOCKED);
+	if (ret) {
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		return ret;
+	}
+
+	/* Update all contexts now that we've stalled the submission. */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		ret = engine->context_pin(engine, ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}
+
+		gen8_update_reg_state_unlocked(ctx,
+					       ctx->engine[RCS].lrc_reg_state);
+	}
+
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	return 0;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed causing a lot of redundant
+	 * work to processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering to use that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
+		       dev_priv->perf.oa.mux_regs_len);
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	/* It takes a fairly long time for a new MUX configuration to
+	 * be applied after these register writes. This delay
+	 * duration is taken from Haswell (derived empirically based on
+	 * the render_basic config) but hopefully it covers the
+	 * maximum configuration latency for Gen8+ too...
+	 */
+	usleep_range(15000, 20000);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	ret = gen8_configure_all_contexts(dev_priv);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* NOP */
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1157,6 +1729,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }

+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory, which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1183,6 +1778,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }

+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1361,6 +1961,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }

+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	gen8_update_reg_state_unlocked(ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1486,7 +2101,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);

-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1775,6 +2390,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;

@@ -1792,12 +2408,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}

+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2069,9 +2701,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;

@@ -2087,11 +2716,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;

-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}

+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2107,13 +2763,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;

-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);

 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2172,36 +2839,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note: although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enable
+		 * worth the complexity to maintain now that BDW+ enables
+		 */

-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}

-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}

-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);

-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);

-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }

 /**
@@ -2216,5 +2952,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);

 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index ee8170cda93e..f8acde02c314 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define GEN8_OACTXID _MMIO(0x2364)

+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)

+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)

 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)

 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0

 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define OA_MEM_SELECT_GGTT  (1<<0)

+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)

 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)

+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 0909549ad320..58216c4bfbc1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1877,6 +1877,8 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(dev_priv));
+
+		i915_oa_init_reg_state(engine, ctx, regs);
 	}
 }

diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 464547d08173..15bc9f78ba4d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
 };

 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,

 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v9 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-05-02  1:17               ` [PATCH v9 " Lionel Landwerlin
@ 2017-05-02 20:14                 ` Chris Wilson
  2017-05-02 21:44                   ` [PATCH v10 " Lionel Landwerlin
  0 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2017-05-02 20:14 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx, matthew.auld

On Mon, May 01, 2017 at 06:17:09PM -0700, Lionel Landwerlin wrote:

Focusing on the bit I know best and leaving the hw mumbo jumbo to one
side...

> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
> +{
> +	struct intel_engine_cs *engine = dev_priv->engine[RCS];
> +	struct i915_gem_context *ctx;
> +	int ret;
> +
> +	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +	if (ret)
> +		return ret;
> +
> +	/* Switch away from any user context.  */
> +	ret = i915_gem_switch_to_kernel_context(dev_priv);
> +	if (ret) {
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +		return ret;
> +	}
> +
> +	/* The OA register config is setup through the context image. This image
> +	 * might be written to by the GPU on context switch (in particular on
> +	 * lite-restore). This means we can't safely update a context's image,
> +	 * if this context is scheduled/submitted to run on the GPU.
> +	 *
> +	 * We could emit the OA register config through the batch buffer but
> +	 * this might leave small interval of time where the OA unit is
> +	 * configured at an invalid sampling period.
> +	 *
> +	 * So far the best way to work around this issue seems to be draining
> +	 * the GPU from any submitted work.
> +	 */
> +	ret = i915_gem_wait_for_idle(dev_priv,
> +				     I915_WAIT_INTERRUPTIBLE |
> +				     I915_WAIT_LOCKED);
> +	if (ret) {
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +		return ret;
> +	}
> +
> +	/* Update all contexts now that we've stalled the submission. */
> +	list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +		ret = engine->context_pin(engine, ctx);
> +		if (ret) {
> +			mutex_unlock(&dev_priv->drm.struct_mutex);
> +			return ret;
> +		}
> +
> +		gen8_update_reg_state_unlocked(ctx,
> +					       ctx->engine[RCS].lrc_reg_state);

engine->context_unpin is missing.

Since you do populate the OA settings for contexts initialised later,
you can skip the context_pin here (keeping the deferred initialisation
for currently unused contexts) and use

	list_for_each_entry(ctx, &dev_priv->context_list, link) {
		struct intel_context *ce = &ctx->engine[RCS];
		u32 *regs;

		if (!ce->state) /* OA settings will be set upon first use */
			continue;

		/* Hmm, export this from intel_lrc.c? */
		regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB);
		if (IS_ERR(regs)) {
			/* teardown? */
			mutex_unlock(&ctx->i915->drm.struct_mutex);
			return PTR_ERR(regs);
		}
		ce->state->obj->mm.dirty = true;
		regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
		/* Probably best since we don't really want to expose
		 * the LRC_STATE_PN trick.
		 */

		gen8_update_reg_state_unlocked(ctx, regs);

		i915_gem_object_unpin_map(ce->state->obj);
	}

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v10 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-05-02 20:14                 ` Chris Wilson
@ 2017-05-02 21:44                   ` Lionel Landwerlin
  2017-05-03 10:31                     ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-02 21:44 UTC (permalink / raw)
  To: intel-gfx; +Cc: matthew.auld

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently-out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, it means
we can no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try and subtract the progress
of other contexts before forwarding reports via read() we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require the
dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
if not root.
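
For illustration only (not part of this patch): opening a system-wide
OA stream on Gen8+ through the existing i915 perf uapi looks roughly
like the sketch below. It is untested, error handling is trimmed, and
the metrics set ID, OA exponent and header path are placeholders:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>	/* header path varies with the install */

  static int open_oa_stream(int drm_fd, uint64_t metrics_set)
  {
  	uint64_t properties[] = {
  		/* No DRM_I915_PERF_PROP_CTX_HANDLE here, i.e. a
  		 * system-wide stream: on Gen8+ this needs either
  		 * CAP_SYS_ADMIN or dev.i915.perf_stream_paranoid=0
  		 * as described above.
  		 */
  		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
  		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
  		DRM_I915_PERF_PROP_OA_FORMAT,
  			I915_OA_FORMAT_A32u40_A4u32_B8_C8,
  		DRM_I915_PERF_PROP_OA_EXPONENT, 16, /* placeholder */
  	};
  	struct drm_i915_perf_open_param param = {
  		.flags = I915_PERF_FLAG_FD_CLOEXEC |
  			 I915_PERF_FLAG_FD_NONBLOCK,
  		.num_properties = sizeof(properties) /
  				  (2 * sizeof(uint64_t)),
  		.properties_ptr = (uintptr_t)properties,
  	};

  	/* On success the ioctl returns a new stream fd that OA
  	 * sample records can be read() from; without sufficient
  	 * privileges it fails with EACCES.
  	 */
  	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
  }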

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

v6: In addition to drain, switch to kernel context & update all
    context in place (Chris)

v7: Add missing mutex_unlock() if switching to kernel context fails
    (Matthew)

v8: Simplify OA period/flex-eu-counters programming by using the
    batchbuffer instead of modifying ctx-image (Lionel)

v9: Back to updating the context image (due to erroneous testing,
    batchbuffer programming the OA unit doesn't actually work)
    (Lionel)
    Pin context before updating context image (Chris)
    Drop MMIO programming now that we switch to a kernel context with
    right values in initial context image (Chris)

v10: Just pin_map the contexts we want to modify or let the
     configuration happen on first use (Chris)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
 drivers/gpu/drm/i915/i915_drv.h  |  45 +-
 drivers/gpu/drm/i915/i915_perf.c | 889 +++++++++++++++++++++++++++++++++++----
 drivers/gpu/drm/i915/i915_reg.h  |  22 +
 drivers/gpu/drm/i915/intel_lrc.c |   2 +
 include/uapi/drm/i915_drm.h      |  19 +-
 5 files changed, 884 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f15751534c29..eaf4ce5e5790 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2063,9 +2063,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);

 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2096,20 +2104,13 @@ struct i915_oa_ops {
 		    size_t *offset);

 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };

 struct intel_cdclk_state {
@@ -2470,6 +2471,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;

@@ -2539,6 +2541,14 @@ struct drm_i915_private {
 			} oa_buffer;

 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;

 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2849,6 +2859,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3612,6 +3624,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,

 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    uint32_t *reg_state);

 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 3277a52ce98e..7a82681d122e 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@

 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"

 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;

 #define INVALID_CTX_ID 0xffffffff

+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+

 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };

+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)

 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };

+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;

@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;

-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);

 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;

@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (which are
+	 * notably all powers of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context, it needs
+		 * to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiates appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering how we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, a buffer overflow in any case likely indicates that
+	 * something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * Copies all buffered OA reports into userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;

-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);

 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;

 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }

 /**
@@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	int ret;

-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		int ret;

-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ret = engine->context_pin(engine, stream->ctx);
-	if (ret)
-		goto unlock;
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;

-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we pin
+		 * the vma to ensure the ID remains fixed.
+		 */
+		ret = engine->context_pin(engine, stream->ctx);
+		if (ret) {
+			mutex_unlock(&dev_priv->drm.struct_mutex);
+			return ret;
+		}

-unlock:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);

-	return ret;
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
+
+	return 0;
 }

 /**
@@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[RCS];

-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);

-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);

-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }

 static void
@@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }

+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1113,6 +1489,212 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }

+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+	if (ret)
+		return ret;
+
+	/* Switch away from any user context.  */
+	ret = i915_gem_switch_to_kernel_context(dev_priv);
+	if (ret)
+		goto out;
+
+	/* The OA register config is set up through the context image. This image
+	 * might be written to by the GPU on context switch (in particular on
+	 * lite-restore). This means we can't safely update a context's image,
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave a small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv,
+				     I915_WAIT_INTERRUPTIBLE |
+				     I915_WAIT_LOCKED);
+	if (ret)
+		goto out;
+
+	/* Update all contexts now that we've stalled the submission. */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		struct intel_context *ce = &ctx->engine[RCS];
+		u32 *regs;
+
+		/* OA settings will be set upon first use */
+		if (!ce->state)
+			continue;
+
+		regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB);
+		if (IS_ERR(regs)) {
+			ret = PTR_ERR(regs);
+			goto out;
+		}
+
+		ce->state->obj->mm.dirty = true;
+		regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
+		/* Probably best since we don't really want to expose
+		 * the LRC_STATE_PN trick.
+		 */
+
+		gen8_update_reg_state_unlocked(ctx, regs);
+
+		i915_gem_object_unpin_map(ce->state->obj);
+	}
+
+ out:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	return ret;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed, causing a lot of redundant
+	 * work for processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering using that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
+		       dev_priv->perf.oa.mux_regs_len);
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	/* It takes a fairly long time for a new MUX configuration to
+	 * be applied after these register writes. This delay
+	 * duration is taken from Haswell (derived empirically based on
+	 * the render_basic config) but hopefully it covers the
+	 * maximum configuration latency for Gen8+ too...
+	 */
+	usleep_range(15000, 20000);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	ret = gen8_configure_all_contexts(dev_priv);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* NOP */
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1157,6 +1739,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }

+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1183,6 +1788,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }

+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1361,6 +1971,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }

+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	gen8_update_reg_state_unlocked(ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1486,7 +2111,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);

-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1775,6 +2400,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;

@@ -1792,12 +2418,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}

+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2069,9 +2711,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;

@@ -2087,11 +2726,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;

-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}

+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2107,13 +2773,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;

-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);

 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2172,36 +2849,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note that although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enable
+		 * execlist mode by default.
+		 */

-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}

-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}

-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);

-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);

-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);

-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }

 /**
@@ -2216,5 +2962,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);

 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index ee8170cda93e..f8acde02c314 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define GEN8_OACTXID _MMIO(0x2364)

+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)

+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)

 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)

 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0

 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)

 #define OA_MEM_SELECT_GGTT  (1<<0)

+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)

 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)

+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 0909549ad320..58216c4bfbc1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1877,6 +1877,8 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(dev_priv));
+
+		i915_oa_init_reg_state(engine, ctx, regs);
 	}
 }

diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 464547d08173..15bc9f78ba4d 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1316,13 +1316,18 @@ struct drm_i915_gem_context_param {
 };

 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,

 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v10 12/15] drm/i915/perf: Add OA unit support for Gen 8+
  2017-05-02 21:44                   ` [PATCH v10 " Lionel Landwerlin
@ 2017-05-03 10:31                     ` Chris Wilson
  0 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2017-05-03 10:31 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx, matthew.auld

On Tue, May 02, 2017 at 02:44:59PM -0700, Lionel Landwerlin wrote:
> From: Robert Bragg <robert@sixbynine.org>
> 
> Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
> share (more-or-less) the same OA unit design.
> 
> Of particular note in comparison to Haswell: some OA unit HW config
> state has become per-context state and as a consequence it is somewhat
> more complicated to manage synchronous state changes from the cpu while
> there's no guarantee of what context (if any) is currently actively
> running on the gpu.
> 
> The periodic sampling frequency which can be particularly useful for
> system-wide analysis (as opposed to command stream synchronised
> MI_REPORT_PERF_COUNT commands) is perhaps the most surprising state to
> have become per-context saved and restored (while the OABUFFER
> destination is still a shared, system-wide resource).
> 
> This support for gen8+ takes care to consider a number of timing
> challenges involved in synchronously updating per-context state
> primarily by programming all config state from the cpu and updating all
> current and saved contexts synchronously while the OA unit is still
> disabled.
> 
> The driver intentionally avoids depending on command streamer
> programming to update OA state considering the lack of synchronization
> between the automatic loading of OACTXCONTROL state (that includes the
> periodic sampling state and enable state) on context restore and the
> parsing of any general purpose BB the driver can control. I.e. this
> implementation is careful to avoid the possibility of a context restore
> temporarily enabling any out-of-date periodic sampling state. In
> addition to the risk of transiently-out-of-date state being loaded
> automatically, there are also internal HW latencies involved in the
> loading of MUX configurations which would be difficult to account for
> from the command streamer (and we only want to enable the unit once
> the MUX configuration is complete).
> 
> Since the Gen8+ OA unit design no longer supports clock gating the unit
> off for a single given context (which effectively stopped any progress
> of counters while any other context was running) and instead supports
> tagging OA reports with a context ID for filtering on the CPU, we can
> no longer hide the system-wide progress of counters from a
> non-privileged application only interested in metrics for its own
> context. Although we could theoretically try and subtract the progress
> of other contexts before forwarding reports via read() we aren't in a
> position to filter reports captured via MI_REPORT_PERF_COUNT commands.
> As a result, for Gen8+, we always require the
> dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
> if not root.
> 
> v5: Drain submitted requests when enabling metric set to ensure no
>     lite-restore erases the context image we just updated (Lionel)
> 
> v6: In addition to drain, switch to kernel context & update all
>     context in place (Chris)
> 
> v7: Add missing mutex_unlock() if switching to kernel context fails
>     (Matthew)
> 
> v8: Simplify OA period/flex-eu-counters programming by using the
>     batchbuffer instead of modifying ctx-image (Lionel)
> 
> v9: Back to updating the context image (due to erroneous testing,
>     batchbuffer programming the OA unit doesn't actually work)
>     (Lionel)
>     Pin context before updating context image (Chris)
>     Drop MMIO programming now that we switch to a kernel context with
>     right values in initial context image (Chris)
> 
> v10: Just pin_map the contexts we want to modify or let the
>      configuration happen on first use (Chris)
> 
> Signed-off-by: Robert Bragg <robert@sixbynine.org>
> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
> ---
>  drivers/gpu/drm/i915/i915_drv.h  |  45 +-
>  drivers/gpu/drm/i915/i915_perf.c | 889 +++++++++++++++++++++++++++++++++++----
>  drivers/gpu/drm/i915/i915_reg.h  |  22 +
>  drivers/gpu/drm/i915/intel_lrc.c |   2 +
>  include/uapi/drm/i915_drm.h      |  19 +-
>  5 files changed, 884 insertions(+), 93 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index f15751534c29..eaf4ce5e5790 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2063,9 +2063,17 @@ struct i915_oa_ops {
>  	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
> 
>  	/**
> -	 * @enable_metric_set: Applies any MUX configuration to set up the
> -	 * Boolean and Custom (B/C) counters that are part of the counter
> -	 * reports being sampled. May apply system constraints such as
> +	 * @select_metric_set: The auto generated code that checks whether a
> +	 * requested OA config is applicable to the system and if so sets up
> +	 * the mux, oa and flex eu register config pointers according to the
> +	 * current dev_priv->perf.oa.metrics_set.
> +	 */
> +	int (*select_metric_set)(struct drm_i915_private *dev_priv);
> +
> +	/**
> +	 * @enable_metric_set: Selects and applies any MUX configuration to set
> +	 * up the Boolean and Custom (B/C) counters that are part of the
> +	 * counter reports being sampled. May apply system constraints such as
>  	 * disabling EU clock gating as required.
>  	 */
>  	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
> @@ -2096,20 +2104,13 @@ struct i915_oa_ops {
>  		    size_t *offset);
> 
>  	/**
> -	 * @oa_buffer_check: Check for OA buffer data + update tail
> -	 *
> -	 * This is either called via fops or the poll check hrtimer (atomic
> -	 * ctx) without any locks taken.
> +	 * @oa_hw_tail_read: read the OA tail pointer register
>  	 *
> -	 * It's safe to read OA config state here unlocked, assuming that this
> -	 * is only called while the stream is enabled, while the global OA
> -	 * configuration can't be modified.
> -	 *
> -	 * Efficiency is more important than avoiding some false positives
> -	 * here, which will be handled gracefully - likely resulting in an
> -	 * %EAGAIN error for userspace.
> +	 * In particular this enables us to share all the fiddly code for
> +	 * handling the OA unit tail pointer race that affects multiple
> +	 * generations.
>  	 */
> -	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
> +	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
>  };
> 
>  struct intel_cdclk_state {
> @@ -2470,6 +2471,7 @@ struct drm_i915_private {
>  			struct {
>  				struct i915_vma *vma;
>  				u8 *vaddr;
> +				u32 last_ctx_id;
>  				int format;
>  				int format_size;
> 
> @@ -2539,6 +2541,14 @@ struct drm_i915_private {
>  			} oa_buffer;
> 
>  			u32 gen7_latched_oastatus1;
> +			u32 ctx_oactxctrl_offset;
> +			u32 ctx_flexeu0_offset;
> +
> +			/* The RPT_ID/reason field for Gen8+ includes a bit
> +			 * to determine if the CTX ID in the report is valid
> +			 * but the specific bit differs between Gen 8 and 9
> +			 */
> +			u32 gen8_valid_ctx_bit;
> 
>  			struct i915_oa_ops ops;
>  			const struct i915_oa_format *oa_formats;
> @@ -2849,6 +2859,8 @@ intel_info(const struct drm_i915_private *dev_priv)
>  #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
>  				 INTEL_DEVID(dev_priv) == 0x5915 || \
>  				 INTEL_DEVID(dev_priv) == 0x591E)
> +#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
> +				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
>  #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
>  				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
>  #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
> @@ -3612,6 +3624,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
> 
>  int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>  			 struct drm_file *file);
> +void i915_oa_init_reg_state(struct intel_engine_cs *engine,
> +			    struct i915_gem_context *ctx,
> +			    uint32_t *reg_state);
> 
>  /* i915_gem_evict.c */
>  int __must_check i915_gem_evict_something(struct i915_address_space *vm,
> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
> index 3277a52ce98e..7a82681d122e 100644
> --- a/drivers/gpu/drm/i915/i915_perf.c
> +++ b/drivers/gpu/drm/i915/i915_perf.c
> @@ -196,6 +196,12 @@
> 
>  #include "i915_drv.h"
>  #include "i915_oa_hsw.h"
> +#include "i915_oa_bdw.h"
> +#include "i915_oa_chv.h"
> +#include "i915_oa_sklgt2.h"
> +#include "i915_oa_sklgt3.h"
> +#include "i915_oa_sklgt4.h"
> +#include "i915_oa_bxt.h"
> 
>  /* HW requires this to be a power of two, between 128k and 16M, though driver
>   * is currently generally designed assuming the largest 16M size is used such
> @@ -215,7 +221,7 @@
>   *
>   * Although this can be observed explicitly while copying reports to userspace
>   * by checking for a zeroed report-id field in tail reports, we want to account
> - * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
> + * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
>   * read() attempts.
>   *
>   * In effect we define a tail pointer for reading that lags the real tail
> @@ -237,7 +243,7 @@
>   * indicates that an updated tail pointer is needed.
>   *
>   * Most of the implementation details for this workaround are in
> - * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
> + * oa_buffer_check_unlocked() and _append_oa_reports()
>   *
>   * Note for posterity: previously the driver used to define an effective tail
>   * pointer that lagged the real pointer by a 'tail margin' measured in bytes
> @@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
> 
>  #define INVALID_CTX_ID 0xffffffff
> 
> +/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
> +#define OAREPORT_REASON_MASK           0x3f
> +#define OAREPORT_REASON_SHIFT          19
> +#define OAREPORT_REASON_TIMER          (1<<0)
> +#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
> +#define OAREPORT_REASON_CLK_RATIO      (1<<5)
> +
> 
>  /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
>   *
> @@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
>  	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
>  };
> 
> +static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
> +	[I915_OA_FORMAT_A12]		    = { 0, 64 },
> +	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
> +	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
> +	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
> +};
> +
>  #define SAMPLE_OA_REPORT      (1<<0)
> 
>  /**
> @@ -332,8 +352,20 @@ struct perf_open_properties {
>  	int oa_period_exponent;
>  };
> 
> +static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
> +{
> +	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
> +}
> +
> +static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
> +{
> +	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
> +
> +	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
> +}
> +
>  /**
> - * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
> + * oa_buffer_check_unlocked - check for data and update tail ptr state
>   * @dev_priv: i915 device instance
>   *
>   * This is either called via fops (for blocking reads in user ctx) or the poll
> @@ -356,12 +388,11 @@ struct perf_open_properties {
>   *
>   * Returns: %true if the OA buffer contains data, else %false
>   */
> -static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
> +static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>  {
>  	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
>  	unsigned long flags;
>  	unsigned int aged_idx;
> -	u32 oastatus1;
>  	u32 head, hw_tail, aged_tail, aging_tail;
>  	u64 now;
> 
> @@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>  	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
>  	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
> 
> -	oastatus1 = I915_READ(GEN7_OASTATUS1);
> -	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
> +	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
> 
>  	/* The tail pointer increases in 64 byte increments,
>  	 * not in report_size steps...
> @@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
>  	if (aging_tail != INVALID_TAIL_PTR &&
>  	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
>  	     OA_TAIL_MARGIN_NSEC)) {
> +
>  		aged_idx ^= 1;
>  		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
> 
> @@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
>   *
>   * Returns: 0 on success, negative error code on failure.
>   */
> +static int gen8_append_oa_reports(struct i915_perf_stream *stream,
> +				  char __user *buf,
> +				  size_t count,
> +				  size_t *offset)
> +{
> +	struct drm_i915_private *dev_priv = stream->dev_priv;
> +	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
> +	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
> +	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
> +	u32 mask = (OA_BUFFER_SIZE - 1);
> +	size_t start_offset = *offset;
> +	unsigned long flags;
> +	unsigned int aged_tail_idx;
> +	u32 head, tail;
> +	u32 taken;
> +	int ret = 0;
> +
> +	if (WARN_ON(!stream->enabled))
> +		return -EIO;
> +
> +	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	head = dev_priv->perf.oa.oa_buffer.head;
> +	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
> +	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
> +
> +	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	/* An invalid tail pointer here means we're still waiting for the poll
> +	 * hrtimer callback to give us a pointer
> +	 */
> +	if (tail == INVALID_TAIL_PTR)
> +		return -EAGAIN;
> +
> +	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
> +	 * while indexing relative to oa_buf_base.
> +	 */
> +	head -= gtt_offset;
> +	tail -= gtt_offset;
> +
> +	/* An out of bounds or misaligned head or tail pointer implies a driver
> +	 * bug since we validate + align the tail pointers we read from the
> +	 * hardware and we are in full control of the head pointer which should
> +	 * only be incremented by multiples of the report size (notably also
> +	 * all a power of two).
> +	 */
> +	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
> +		      tail > OA_BUFFER_SIZE || tail % report_size,
> +		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
> +		      head, tail))
> +		return -EIO;
> +
> +
> +	for (/* none */;
> +	     (taken = OA_TAKEN(tail, head));
> +	     head = (head + report_size) & mask) {
> +		u8 *report = oa_buf_base + head;
> +		u32 *report32 = (void *)report;
> +		u32 ctx_id;
> +		u32 reason;
> +
> +		/* All the report sizes factor neatly into the buffer
> +		 * size so we never expect to see a report split
> +		 * between the beginning and end of the buffer.
> +		 *
> +		 * Given the initial alignment check a misalignment
> +		 * here would imply a driver bug that would result
> +		 * in an overrun.
> +		 */
> +		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
> +			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
> +			break;
> +		}
> +
> +		/* The reason field includes flags identifying what
> +		 * triggered this specific report (mostly timer
> +		 * triggered or e.g. due to a context switch).
> +		 *
> +		 * This field is never expected to be zero so we can
> +		 * check that the report isn't invalid before copying
> +		 * it to userspace...
> +		 */
> +		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
> +			  OAREPORT_REASON_MASK);
> +		if (reason == 0) {
> +			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
> +				DRM_NOTE("Skipping spurious, invalid OA report\n");
> +			continue;
> +		}
> +
> +		/* XXX: Just keep the lower 21 bits for now since I'm not
> +		 * entirely sure if the HW touches any of the higher bits in
> +		 * this field
> +		 */
> +		ctx_id = report32[2] & 0x1fffff;
> +
> +		/* Squash whatever is in the CTX_ID field if it's marked as
> +		 * invalid to be sure we avoid false-positive, single-context
> +		 * filtering below...
> +		 *
> +		 * Note: that we don't clear the valid_ctx_bit so userspace can
> +		 * understand that the ID has been squashed by the kernel.
> +		 */
> +		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
> +			ctx_id = report32[2] = INVALID_CTX_ID;
> +
> +		/* NB: For Gen 8 the OA unit no longer supports clock gating
> +		 * off for a specific context and the kernel can't securely
> +		 * stop the counters from updating as system-wide / global
> +		 * values.
> +		 *
> +		 * Automatic reports now include a context ID so reports can be
> +		 * filtered on the cpu but it's not worth trying to
> +		 * automatically subtract/hide counter progress for other
> +		 * contexts while filtering since we can't stop userspace
> +		 * issuing MI_REPORT_PERF_COUNT commands which would still
> +		 * provide a side-band view of the real values.
> +		 *
> +		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
> +		 * to normalize counters for a single filtered context then it
> +		 * needs be forwarded bookend context-switch reports so that it
> +		 * can track switches in between MI_REPORT_PERF_COUNT commands
> +		 * and can itself subtract/ignore the progress of counters
> +		 * associated with other contexts. Note that the hardware
> +		 * automatically triggers reports when switching to a new
> +		 * context which are tagged with the ID of the newly active
> +		 * context. To avoid the complexity (and likely fragility) of
> +		 * reading ahead while parsing reports to try and minimize
> +		 * forwarding redundant context switch reports (i.e. between
> +		 * other, unrelated contexts) we simply elect to forward them
> +		 * all.
> +		 *
> +		 * We don't rely solely on the reason field to identify context
> +		 * switches since it's not-uncommon for periodic samples to
> +		 * identify a switch before any 'context switch' report.
> +		 */
> +		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
> +		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
> +		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
> +		     dev_priv->perf.oa.specific_ctx_id) ||
> +		    reason & OAREPORT_REASON_CTX_SWITCH) {
> +
> +			/* While filtering for a single context we avoid
> +			 * leaking the IDs of other contexts.
> +			 */
> +			if (dev_priv->perf.oa.exclusive_stream->ctx &&
> +			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
> +				report32[2] = INVALID_CTX_ID;
> +			}
> +
> +			ret = append_oa_sample(stream, buf, count, offset,
> +					       report);
> +			if (ret)
> +				break;
> +
> +			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
> +		}
> +
> +		/* The above reason field sanity check is based on
> +		 * the assumption that the OA buffer is initially
> +		 * zeroed and we reset the field after copying so the
> +		 * check is still meaningful once old reports start
> +		 * being overwritten.
> +		 */
> +		report32[0] = 0;
> +	}
> +
> +	if (start_offset != *offset) {
> +		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +		/* We removed the gtt_offset for the copy loop above, indexing
> +		 * relative to oa_buf_base so put back here...
> +		 */
> +		head += gtt_offset;
> +
> +		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
> +		dev_priv->perf.oa.oa_buffer.head = head;
> +
> +		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * gen8_oa_read - copy status records then buffered OA reports
> + * @stream: An i915-perf stream opened for OA metrics
> + * @buf: destination buffer given by userspace
> + * @count: the number of bytes userspace wants to read
> + * @offset: (inout): the current position for writing into @buf
> + *
> + * Checks OA unit status registers and if necessary appends corresponding
> + * status records for userspace (such as for a buffer full condition) and then
> + * initiate appending any buffered OA reports.
> + *
> + * Updates @offset according to the number of bytes successfully copied into
> + * the userspace buffer.
> + *
> + * NB: some data may be successfully copied to the userspace buffer
> + * even if an error is returned, and this is reflected in the
> + * updated @offset.
> + *
> + * Returns: zero on success or a negative error code
> + */
> +static int gen8_oa_read(struct i915_perf_stream *stream,
> +			char __user *buf,
> +			size_t count,
> +			size_t *offset)
> +{
> +	struct drm_i915_private *dev_priv = stream->dev_priv;
> +	u32 oastatus;
> +	int ret;
> +
> +	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
> +		return -EIO;
> +
> +	oastatus = I915_READ(GEN8_OASTATUS);
> +
> +	/* We treat OABUFFER_OVERFLOW as a significant error:
> +	 *
> +	 * Although theoretically we could handle this more gracefully
> +	 * sometimes, some Gens don't correctly suppress certain
> +	 * automatically triggered reports in this condition and so we
> +	 * have to assume that old reports are now being trampled
> +	 * over.
> +	 *
> +	 * Considering how we don't currently give userspace control
> +	 * over the OA buffer size and always configure a large 16MB
> +	 * buffer, then a buffer overflow does anyway likely indicate
> +	 * that something has gone quite badly wrong.
> +	 */
> +	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
> +		ret = append_oa_status(stream, buf, count, offset,
> +				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
> +		if (ret)
> +			return ret;
> +
> +		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
> +			  dev_priv->perf.oa.period_exponent);
> +
> +		dev_priv->perf.oa.ops.oa_disable(dev_priv);
> +		dev_priv->perf.oa.ops.oa_enable(dev_priv);
> +
> +		/* Note: .oa_enable() is expected to re-init the oabuffer
> +		 * and reset GEN8_OASTATUS for us
> +		 */
> +		oastatus = I915_READ(GEN8_OASTATUS);
> +	}
> +
> +	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
> +		ret = append_oa_status(stream, buf, count, offset,
> +				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
> +		if (ret)
> +			return ret;
> +		I915_WRITE(GEN8_OASTATUS,
> +			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
> +	}
> +
> +	return gen8_append_oa_reports(stream, buf, count, offset);
> +}
> +
> +/**
> + * Copies all buffered OA reports into userspace read() buffer.
> + * @stream: An i915-perf stream opened for OA metrics
> + * @buf: destination buffer given by userspace
> + * @count: the number of bytes userspace wants to read
> + * @offset: (inout): the current position for writing into @buf
> + *
> + * Notably any error condition resulting in a short read (-%ENOSPC or
> + * -%EFAULT) will be returned even though one or more records may
> + * have been successfully copied. In this case it's up to the caller
> + * to decide if the error should be squashed before returning to
> + * userspace.
> + *
> + * Note: reports are consumed from the head, and appended to the
> + * tail, so the tail chases the head?... If you think that's mad
> + * and back-to-front you're not alone, but this follows the
> + * Gen PRM naming convention.
> + *
> + * Returns: 0 on success, negative error code on failure.
> + */
>  static int gen7_append_oa_reports(struct i915_perf_stream *stream,
>  				  char __user *buf,
>  				  size_t count,
> @@ -733,7 +1045,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
>  		if (ret)
>  			return ret;
> 
> -		DRM_DEBUG("OA buffer overflow: force restart\n");
> +		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
> +			  dev_priv->perf.oa.period_exponent);
> 
>  		dev_priv->perf.oa.ops.oa_disable(dev_priv);
>  		dev_priv->perf.oa.ops.oa_enable(dev_priv);
> @@ -776,7 +1089,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
>  		return -EIO;
> 
>  	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
> -					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
> +					oa_buffer_check_unlocked(dev_priv));
>  }
> 
>  /**
> @@ -833,33 +1146,37 @@ static int i915_oa_read(struct i915_perf_stream *stream,
>  static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
>  {
>  	struct drm_i915_private *dev_priv = stream->dev_priv;
> -	struct intel_engine_cs *engine = dev_priv->engine[RCS];
> -	int ret;
> 
> -	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> -	if (ret)
> -		return ret;
> +	if (i915.enable_execlists)
> +		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
> +	else {
> +		struct intel_engine_cs *engine = dev_priv->engine[RCS];
> +		int ret;
> 
> -	/* As the ID is the gtt offset of the context's vma we pin
> -	 * the vma to ensure the ID remains fixed.
> -	 *
> -	 * NB: implied RCS engine...
> -	 */
> -	ret = engine->context_pin(engine, stream->ctx);
> -	if (ret)
> -		goto unlock;
> +		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +		if (ret)
> +			return ret;
> 
> -	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
> -	 * on the fly) considering the difference with gen8+ and
> -	 * execlists
> -	 */
> -	dev_priv->perf.oa.specific_ctx_id =
> -		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
> +		/* As the ID is the gtt offset of the context's vma we pin
> +		 * the vma to ensure the ID remains fixed.
> +		 */
> +		ret = engine->context_pin(engine, stream->ctx);
> +		if (ret) {
> +			mutex_unlock(&dev_priv->drm.struct_mutex);
> +			return ret;
> +		}
> 
> -unlock:
> -	mutex_unlock(&dev_priv->drm.struct_mutex);
> +		/* Explicitly track the ID (instead of calling
> +		 * i915_ggtt_offset() on the fly) considering the difference
> +		 * with gen8+ and execlists
> +		 */
> +		dev_priv->perf.oa.specific_ctx_id =
> +			i915_ggtt_offset(stream->ctx->engine[engine->id].state);
> 
> -	return ret;
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +	}
> +
> +	return 0;
>  }
> 
>  /**
> @@ -874,12 +1191,16 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
>  	struct drm_i915_private *dev_priv = stream->dev_priv;
>  	struct intel_engine_cs *engine = dev_priv->engine[RCS];
> 
> -	mutex_lock(&dev_priv->drm.struct_mutex);
> +	if (i915.enable_execlists) {
> +		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> +	} else {
> +		mutex_lock(&dev_priv->drm.struct_mutex);
> 
> -	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> -	engine->context_unpin(engine, stream->ctx);
> +		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
> +		engine->context_unpin(engine, stream->ctx);
> 
> -	mutex_unlock(&dev_priv->drm.struct_mutex);
> +		mutex_unlock(&dev_priv->drm.struct_mutex);
> +	}
>  }
> 
>  static void
> @@ -969,6 +1290,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
>  	dev_priv->perf.oa.pollin = false;
>  }
> 
> +static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
> +{
> +	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	I915_WRITE(GEN8_OASTATUS, 0);
> +	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
> +	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
> +
> +	I915_WRITE(GEN8_OABUFFER_UDW, 0);
> +
> +	/* PRM says:
> +	 *
> +	 *  "This MMIO must be set before the OATAILPTR
> +	 *  register and after the OAHEADPTR register. This is
> +	 *  to enable proper functionality of the overflow
> +	 *  bit."
> +	 */
> +	I915_WRITE(GEN8_OABUFFER, gtt_offset |
> +		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
> +	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
> +
> +	/* Mark that we need updated tail pointers to read from... */
> +	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
> +	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
> +
> +	/* Reset state used to recognise context switches, affecting which
> +	 * reports we will forward to userspace while filtering for a single
> +	 * context.
> +	 */
> +	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
> +
> +	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
> +
> +	/* NB: although the OA buffer will initially be allocated
> +	 * zeroed via shmfs (and so this memset is redundant when
> +	 * first allocating), we may re-init the OA buffer, either
> +	 * when re-enabling a stream or in error/reset paths.
> +	 *
> +	 * The reason we clear the buffer for each re-init is for the
> +	 * sanity check in gen8_append_oa_reports() that looks at the
> +	 * reason field to make sure it's non-zero which relies on
> +	 * the assumption that new reports are being written to zeroed
> +	 * memory...
> +	 */
> +	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
> +
> +	/* Maybe make ->pollin per-stream state if we support multiple
> +	 * concurrent streams in the future.
> +	 */
> +	dev_priv->perf.oa.pollin = false;
> +}
> +
>  static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
>  {
>  	struct drm_i915_gem_object *bo;
> @@ -1113,6 +1489,212 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
>  				      ~GT_NOA_ENABLE));
>  }
> 
> +/* NB: It must always remain pointer safe to run this even if the OA unit
> + * has been disabled.
> + *
> + * It's fine to put out-of-date values into these per-context registers
> + * in the case that the OA unit has been disabled.
> + */
> +static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
> +					   u32 *reg_state)
> +{
> +	struct drm_i915_private *dev_priv = ctx->i915;
> +	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
> +	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
> +	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
> +	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
> +	/* The MMIO offsets for Flex EU registers aren't contiguous */
> +	u32 flex_mmio[] = {
> +		i915_mmio_reg_offset(EU_PERF_CNTL0),
> +		i915_mmio_reg_offset(EU_PERF_CNTL1),
> +		i915_mmio_reg_offset(EU_PERF_CNTL2),
> +		i915_mmio_reg_offset(EU_PERF_CNTL3),
> +		i915_mmio_reg_offset(EU_PERF_CNTL4),
> +		i915_mmio_reg_offset(EU_PERF_CNTL5),
> +		i915_mmio_reg_offset(EU_PERF_CNTL6),
> +	};
> +	int i;
> +
> +	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
> +	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
> +				      GEN8_OA_TIMER_PERIOD_SHIFT) |
> +				     (dev_priv->perf.oa.periodic ?
> +				      GEN8_OA_TIMER_ENABLE : 0) |
> +				     GEN8_OA_COUNTER_RESUME;
> +
> +	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
> +		u32 state_offset = ctx_flexeu0 + i * 2;
> +		u32 mmio = flex_mmio[i];
> +
> +		/* This arbitrary default will select the 'EU FPU0 Pipeline
> +		 * Active' event. In the future it's anticipated that there
> +		 * will be an explicit 'No Event' we can select, but not yet...
> +		 */
> +		u32 value = 0;
> +		int j;
> +
> +		for (j = 0; j < n_flex_regs; j++) {
> +			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
> +				value = flex_regs[j].value;
> +				break;
> +			}
> +		}
> +
> +		reg_state[state_offset] = mmio;
> +		reg_state[state_offset+1] = value;
> +	}
> +}
> +
> +/* Manages updating the per-context aspects of the OA stream
> + * configuration across all contexts.
> + *
> + * The awkward consideration here is that OACTXCONTROL controls the
> + * exponent for periodic sampling which is primarily used for system
> + * wide profiling where we'd like a consistent sampling period even in
> + * the face of context switches.
> + *
> + * Our approach of updating the register state context (as opposed to
> + * say using a workaround batch buffer) ensures that the hardware
> + * won't automatically reload an out-of-date timer exponent even
> + * transiently before a WA BB could be parsed.
> + *
> + * This function needs to:
> + * - Ensure the currently running context's per-context OA state is
> + *   updated
> + * - Ensure that all existing contexts will have the correct per-context
> + *   OA state if they are scheduled for use.
> + * - Ensure any new contexts will be initialized with the correct
> + *   per-context OA state.
> + *
> + * Note: it's only the RCS/Render context that has any OA state.
> + */
> +static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_gem_context *ctx;
> +	int ret;
> +
> +	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> +	if (ret)
> +		return ret;
> +
> +	/* Switch away from any user context.  */
> +	ret = i915_gem_switch_to_kernel_context(dev_priv);
> +	if (ret)
> +		goto out;
> +
> +	/* The OA register config is setup through the context image. This image
> +	 * might be written to by the GPU on context switch (in particular on
> +	 * lite-restore). This means we can't safely update a context's image,
> +	 * if this context is scheduled/submitted to run on the GPU.
> +	 *
> +	 * We could emit the OA register config through the batch buffer but
> +	 * this might leave small interval of time where the OA unit is
> +	 * configured at an invalid sampling period.
> +	 *
> +	 * So far the best way to work around this issue seems to be draining
> +	 * the GPU from any submitted work.
> +	 */
> +	ret = i915_gem_wait_for_idle(dev_priv,
> +				     I915_WAIT_INTERRUPTIBLE |
> +				     I915_WAIT_LOCKED);
> +	if (ret)
> +		goto out;
> +
> +	/* Update all contexts now that we've stalled the submission. */
> +	list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +		struct intel_context *ce = &ctx->engine[RCS];
> +		u32 *regs;
> +
> +		/* OA settings will be set upon first use */
> +		if (!ce->state)
> +			continue;
> +
> +		regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB);
> +		if (IS_ERR(regs)) {
> +			ret = PTR_ERR(regs);
> +			goto out;
> +		}
> +
> +		ce->state->obj->mm.dirty = true;
> +		regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
> +		/* Probably best since we don't really want to expose
> +		 * the LRC_STATE_PN trick.
> +		 */

Sorry, was writing comments to myself :) This is not about the code,
just about the interface to export mapping the context from intel_lrc.c

The only worry I have here is a failure whilst pinning the contexts,
leaving some updated, some not, and upon error we will assume the change
did not occur.

Other than that, this looks correct, and fwiw I do much prefer the idea
of the OA adjustment being synchronous wrt request submission (that
all previous requests complete using the current OA settings and all new
submissions will use the next OA config). The downside is that it may
stall the driver for an indefinite period of time - but for a global
config that seems acceptable in order to have it well defined.
Per-context OA configs presumably are faster to apply?

For the context image adjustment,
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
and Matthew can take responsibility for having checked the rest ;)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
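
As a hypothetical aside (not part of the patch or the review), the per-report
fields discussed above can be decoded from userspace roughly as follows when
the stream was opened with only SAMPLE_OA requested. The reason bits
(bits 19..24 of report dword 0) and the 21-bit context ID in dword 2 mirror
the OAREPORT_REASON_* and ctx_id handling in gen8_append_oa_reports(); the
buffer size and helper name below are made up for illustration.

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <drm/i915_drm.h>

#define OAREPORT_REASON_SHIFT      19
#define OAREPORT_REASON_MASK       0x3f
#define OAREPORT_REASON_CTX_SWITCH (1 << 3)

static void drain_oa_stream(int stream_fd)
{
	uint8_t buf[64 * 1024];
	ssize_t len = read(stream_fd, buf, sizeof(buf));
	ssize_t offset = 0;

	if (len < 0)
		return; /* e.g. EAGAIN on a non-blocking fd with no data yet */

	while (offset < len) {
		const struct drm_i915_perf_record_header *header =
			(const void *)(buf + offset);

		switch (header->type) {
		case DRM_I915_PERF_RECORD_SAMPLE: {
			/* With only SAMPLE_OA, the payload is the raw report */
			const uint32_t *report = (const uint32_t *)(header + 1);
			uint32_t reason = (report[0] >> OAREPORT_REASON_SHIFT) &
					  OAREPORT_REASON_MASK;
			uint32_t ctx_id = report[2] & 0x1fffff;

			/* ctx_id may have been squashed by the kernel when
			 * filtering for a single context */
			if (reason & OAREPORT_REASON_CTX_SWITCH)
				printf("ctx switch report, ctx_id=0x%x\n",
				       ctx_id);
			break;
		}
		case DRM_I915_PERF_RECORD_OA_REPORT_LOST:
			fprintf(stderr, "OA report(s) lost\n");
			break;
		case DRM_I915_PERF_RECORD_OA_BUFFER_LOST:
			fprintf(stderr, "OA buffer overflowed\n");
			break;
		}

		offset += header->size;
	}
}
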
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (14 preceding siblings ...)
  2017-04-24  7:17   ` [PATCH v5 15/15] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
@ 2017-05-03 16:23   ` Lionel Landwerlin
  2017-05-03 16:23     ` [PATCH v2 9/15] drm/i915: expose _SUBSLICE_MASK GETPARM Lionel Landwerlin
  2017-05-03 16:33     ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Chris Wilson
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (2 subsequent siblings)
  18 siblings, 2 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-03 16:23 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables userspace to determine the number of slices enabled and also
know what specific slices are enabled. This information is required, for
example, to be able to analyse some OA counter reports where the counter
configuration depends on the HW slice configuration.

v2: Rebase

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 5 +++++
 include/uapi/drm/i915_drm.h     | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 72fb47a439d2..59a0f9de36b0 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -358,6 +358,11 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		 */
 		value = 1;
 		break;
+	case I915_PARAM_SLICE_MASK:
+		value = INTEL_INFO(dev_priv)->sseu.slice_mask;
+		if (!value)
+			return -ENODEV;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index f24a80d2d42e..25695c3d9a76 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -418,6 +418,9 @@ typedef struct drm_i915_irq_wait {
  */
 #define I915_PARAM_HAS_EXEC_CAPTURE	 45

+/* Query the mask of slices available for this system */
+#define I915_PARAM_SLICE_MASK		 46
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v2 9/15] drm/i915: expose _SUBSLICE_MASK GETPARM
  2017-05-03 16:23   ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
@ 2017-05-03 16:23     ` Lionel Landwerlin
  2017-05-03 16:33     ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Chris Wilson
  1 sibling, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-03 16:23 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Assuming a uniform mask across all slices, this enables userspace to
determine the specific subslices enabled. This information is required,
for example, to be able to analyse some OA counter reports where the
counter configuration depends on the HW sub slice configuration.

v2: Rebase

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 5 +++++
 include/uapi/drm/i915_drm.h     | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 59a0f9de36b0..f9cadfbbb69f 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -363,6 +363,11 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		if (!value)
 			return -ENODEV;
 		break;
+	case I915_PARAM_SUBSLICE_MASK:
+		value = INTEL_INFO(dev_priv)->sseu.subslice_mask;
+		if (!value)
+			return -ENODEV;
+		break;
 	default:
 		DRM_DEBUG("Unknown parameter %d\n", param->param);
 		return -EINVAL;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 25695c3d9a76..464547d08173 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -421,6 +421,11 @@ typedef struct drm_i915_irq_wait {
 /* Query the mask of slices available for this system */
 #define I915_PARAM_SLICE_MASK		 46

+/* Assuming it's uniform for each slice, this queries the mask of subslices
+ * per-slice for this system.
+ */
+#define I915_PARAM_SUBSLICE_MASK	 47
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
--
2.11.0
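
As a quick illustration of the uapi added by this and the previous patch, a
hypothetical userspace snippet querying both masks could look like the
following (the helper names are made up; the ioctl, struct and parameter IDs
are the ones defined in i915_drm.h):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int get_i915_param(int drm_fd, int param, int *value)
{
	struct drm_i915_getparam gp;

	memset(&gp, 0, sizeof(gp));
	gp.param = param;
	gp.value = value;

	return ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp);
}

static void print_topology(int drm_fd)
{
	int slice_mask = 0, subslice_mask = 0;

	if (get_i915_param(drm_fd, I915_PARAM_SLICE_MASK, &slice_mask) ||
	    get_i915_param(drm_fd, I915_PARAM_SUBSLICE_MASK, &subslice_mask)) {
		/* EINVAL on kernels without the params, ENODEV if a mask is 0 */
		fprintf(stderr, "slice/subslice masks not available\n");
		return;
	}

	printf("%d slice(s) (mask 0x%x), %d subslice(s) per slice (mask 0x%x)\n",
	       __builtin_popcount(slice_mask), slice_mask,
	       __builtin_popcount(subslice_mask), subslice_mask);
}
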
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM
  2017-05-03 16:23   ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
  2017-05-03 16:23     ` [PATCH v2 9/15] drm/i915: expose _SUBSLICE_MASK GETPARM Lionel Landwerlin
@ 2017-05-03 16:33     ` Chris Wilson
  2017-05-03 18:14       ` Lionel Landwerlin
  1 sibling, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2017-05-03 16:33 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

On Wed, May 03, 2017 at 09:23:56AM -0700, Lionel Landwerlin wrote:
> From: Robert Bragg <robert@sixbynine.org>
> 
> Enables userspace to determine the number of slices enabled and also
> know what specific slices are enabled. This information is required, for
> example, to be able to analyse some OA counter reports where the counter
> configuration depends on the HW slice configuration.

This is counter to the suggestion that contexts program these
themselves. There will be no global value.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM
  2017-05-03 16:33     ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Chris Wilson
@ 2017-05-03 18:14       ` Lionel Landwerlin
  0 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-03 18:14 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 03/05/17 09:33, Chris Wilson wrote:
> On Wed, May 03, 2017 at 09:23:56AM -0700, Lionel Landwerlin wrote:
>> From: Robert Bragg<robert@sixbynine.org>
>>
>> Enables userspace to determine the number of slices enabled and also
>> know what specific slices are enabled. This information is required, for
>> example, to be able to analyse some OA counter reports where the counter
>> configuration depends on the HW slice configuration.
> This is counter to the suggestion that contexts program these
> themselves. There will be no global value.
> -Chris
>
I saw your patches and was wondering whether the value userspace actually
wants is the all-on mask without the fused-off parts.

If that's not the case, we might have to turn everything on for as long 
as i915 perf is being used.
In any case, I'll have to update the commit message.

I'll get back to you.

Thanks,

-
Lionel


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (15 preceding siblings ...)
  2017-05-03 16:23   ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
@ 2017-05-26 11:56   ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
                       ` (13 more replies)
  2017-05-26 12:28   ` ✓ Fi.CI.BAT: success for series starting with [v2,9/15] drm/i915: expose _SLICE_MASK GETPARM (rev2) Patchwork
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  18 siblings, 14 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

Hi,

Here are the changes from v13:

     * there is no longer a system-wide locking mechanism for the SSEU
       configuration; instead the i915 driver now monitors SSEU changes
       on submission while OA is enabled and inserts events into the
       perf stream so that userspace can parse reports appropriately
       (this is reflected by changes in patch 6 and patch 14).

     * now that the SSEU configuration is configurable per context &
       engine, we need to reprogram the NOA muxes in between batches
       when OA is enabled (patch 13).

With the exception of patch 6, I think patches 1 to 12 don't need much
looking at. In patch 6 the only bit that actually changed is the
gen8_configure_all_contexts() function.

The IGT tests for this are available here:

https://github.com/djdeath/intel-gpu-tools/tree/wip/djdeath/oa-next

Cheers,

Chris Wilson (3):
  drm/i915: Record both min/max eu_per_subslice in sseu_dev_info
  drm/i915: Program RPCS for Broadwell
  drm/i915: Record the sseu configuration per-context & engine

Lionel Landwerlin (6):
  drm/i915/perf: rework mux configurations queries
  drm/i915: add KBL GT2/GT3 check macros
  drm/i915/perf: add KBL support
  drm/i915/perf: add GLK support
  drm/i915/perf: reprogram NOA muxes at the beginning of each workload
  drm/i915/perf: notify sseu configuration changes

Robert Bragg (5):
  drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  drm/i915/perf: Add OA unit support for Gen 8+
  drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  drm/i915/perf: per-gen timebase for checking sample freq
  drm/i915/perf: remove perf.hook_lock

 drivers/gpu/drm/i915/Makefile            |   11 +-
 drivers/gpu/drm/i915/i915_debugfs.c      |   36 +-
 drivers/gpu/drm/i915/i915_drv.h          |  125 +-
 drivers/gpu/drm/i915/i915_gem_context.c  |    9 +
 drivers/gpu/drm/i915/i915_gem_context.h  |   42 +
 drivers/gpu/drm/i915/i915_oa_bdw.c       | 5376 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_bxt.c       | 2690 +++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_chv.c       | 2873 ++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_glk.c       | 2602 +++++++++++++++
 drivers/gpu/drm/i915/i915_oa_glk.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_hsw.c       |  263 +-
 drivers/gpu/drm/i915/i915_oa_hsw.h       |    4 +-
 drivers/gpu/drm/i915/i915_oa_kblgt2.c    | 2991 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt2.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_kblgt3.c    | 3040 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt3.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_sklgt2.c    | 3479 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_sklgt3.c    | 3039 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_sklgt4.c    | 3093 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h    |   40 +
 drivers/gpu/drm/i915/i915_perf.c         | 1622 ++++++++-
 drivers/gpu/drm/i915/i915_reg.h          |   22 +
 drivers/gpu/drm/i915/intel_device_info.c |   32 +-
 drivers/gpu/drm/i915/intel_lrc.c         |   39 +-
 drivers/gpu/drm/i915/intel_lrc.h         |    2 +
 drivers/gpu/drm/i915/intel_ringbuffer.c  |    3 +
 include/uapi/drm/i915_drm.h              |   49 +-
 32 files changed, 31511 insertions(+), 291 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v14 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 02/14] drm/i915: Program RPCS for Broadwell Lionel Landwerlin
                       ` (12 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Chris Wilson <chris@chris-wilson.co.uk>

When we query the available eu on each subslice, we currently only
report the max. It would be useful to report the minimum found as
well.

When we set RPCS (power gating over the EU), we can also specify both
the min and max number of eu to configure on each slice; currently we
just set it to a single value, but the flexibility may be beneficial in
future.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c      | 36 +++++++++++++++++++++++---------
 drivers/gpu/drm/i915/i915_drv.h          |  3 ++-
 drivers/gpu/drm/i915/intel_device_info.c | 32 +++++++++++++++++-----------
 drivers/gpu/drm/i915/intel_lrc.c         |  4 ++--
 4 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index c08a6d8a4e07..149214ba36c3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -4489,6 +4489,7 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_cache_sharing_fops,
 static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 					  struct sseu_dev_info *sseu)
 {
+	unsigned int min_eu_per_subslice, max_eu_per_subslice;
 	int ss_max = 2;
 	int ss;
 	u32 sig1[ss_max], sig2[ss_max];
@@ -4498,6 +4499,9 @@ static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 	sig2[0] = I915_READ(CHV_POWER_SS0_SIG2);
 	sig2[1] = I915_READ(CHV_POWER_SS1_SIG2);
 
+	min_eu_per_subslice = ~0u;
+	max_eu_per_subslice = 0;
+
 	for (ss = 0; ss < ss_max; ss++) {
 		unsigned int eu_cnt;
 
@@ -4512,14 +4516,18 @@ static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 			 ((sig1[ss] & CHV_EU210_PG_ENABLE) ? 0 : 2) +
 			 ((sig2[ss] & CHV_EU311_PG_ENABLE) ? 0 : 2);
 		sseu->eu_total += eu_cnt;
-		sseu->eu_per_subslice = max_t(unsigned int,
-					      sseu->eu_per_subslice, eu_cnt);
+		min_eu_per_subslice = min(min_eu_per_subslice, eu_cnt);
+		max_eu_per_subslice = max(max_eu_per_subslice, eu_cnt);
 	}
+
+	sseu->min_eu_per_subslice = min_eu_per_subslice;
+	sseu->max_eu_per_subslice = max_eu_per_subslice;
 }
 
 static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 				    struct sseu_dev_info *sseu)
 {
+	unsigned int min_eu_per_subslice, max_eu_per_subslice;
 	int s_max = 3, ss_max = 4;
 	int s, ss;
 	u32 s_reg[s_max], eu_reg[2*s_max], eu_mask[2];
@@ -4545,6 +4553,9 @@ static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 		     GEN9_PGCTL_SSB_EU210_ACK |
 		     GEN9_PGCTL_SSB_EU311_ACK;
 
+	min_eu_per_subslice = ~0u;
+	max_eu_per_subslice = 0;
+
 	for (s = 0; s < s_max; s++) {
 		if ((s_reg[s] & GEN9_PGCTL_SLICE_ACK) == 0)
 			/* skip disabled slice */
@@ -4570,11 +4581,14 @@ static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 			eu_cnt = 2 * hweight32(eu_reg[2*s + ss/2] &
 					       eu_mask[ss%2]);
 			sseu->eu_total += eu_cnt;
-			sseu->eu_per_subslice = max_t(unsigned int,
-						      sseu->eu_per_subslice,
-						      eu_cnt);
+
+			min_eu_per_subslice = min(min_eu_per_subslice, eu_cnt);
+			max_eu_per_subslice = max(max_eu_per_subslice, eu_cnt);
 		}
 	}
+
+	sseu->min_eu_per_subslice = min_eu_per_subslice;
+	sseu->max_eu_per_subslice = max_eu_per_subslice;
 }
 
 static void broadwell_sseu_device_status(struct drm_i915_private *dev_priv,
@@ -4587,9 +4601,11 @@ static void broadwell_sseu_device_status(struct drm_i915_private *dev_priv,
 
 	if (sseu->slice_mask) {
 		sseu->subslice_mask = INTEL_INFO(dev_priv)->sseu.subslice_mask;
-		sseu->eu_per_subslice =
-				INTEL_INFO(dev_priv)->sseu.eu_per_subslice;
-		sseu->eu_total = sseu->eu_per_subslice *
+		sseu->min_eu_per_subslice =
+			INTEL_INFO(dev_priv)->sseu.min_eu_per_subslice;
+		sseu->max_eu_per_subslice =
+			INTEL_INFO(dev_priv)->sseu.max_eu_per_subslice;
+		sseu->eu_total = sseu->max_eu_per_subslice *
 				 sseu_subslice_total(sseu);
 
 		/* subtract fused off EU(s) from enabled slice(s) */
@@ -4620,8 +4636,8 @@ static void i915_print_sseu_info(struct seq_file *m, bool is_available_info,
 		   hweight8(sseu->subslice_mask));
 	seq_printf(m, "  %s EU Total: %u\n", type,
 		   sseu->eu_total);
-	seq_printf(m, "  %s EU Per Subslice: %u\n", type,
-		   sseu->eu_per_subslice);
+	seq_printf(m, "  %s EU Per Subslice: [%u, %u]\n", type,
+		   sseu->min_eu_per_subslice, sseu->max_eu_per_subslice);
 
 	if (!is_available_info)
 		return;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9ba22427c05c..7faad0ea18a8 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -783,7 +783,8 @@ struct sseu_dev_info {
 	u8 slice_mask;
 	u8 subslice_mask;
 	u8 eu_total;
-	u8 eu_per_subslice;
+	u8 min_eu_per_subslice;
+	u8 max_eu_per_subslice;
 	u8 min_eu_in_pool;
 	/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
 	u8 subslice_7eu[3];
diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
index 3718341662c2..f21f597c5cb9 100644
--- a/drivers/gpu/drm/i915/intel_device_info.c
+++ b/drivers/gpu/drm/i915/intel_device_info.c
@@ -83,6 +83,7 @@ void intel_device_info_dump(struct drm_i915_private *dev_priv)
 static void cherryview_sseu_info_init(struct drm_i915_private *dev_priv)
 {
 	struct sseu_dev_info *sseu = &mkwrite_device_info(dev_priv)->sseu;
+	unsigned int eu_per_subslice;
 	u32 fuse, eu_dis;
 
 	fuse = I915_READ(CHV_FUSE_GT);
@@ -107,9 +108,10 @@ static void cherryview_sseu_info_init(struct drm_i915_private *dev_priv)
 	 * CHV expected to always have a uniform distribution of EU
 	 * across subslices.
 	*/
-	sseu->eu_per_subslice = sseu_subslice_total(sseu) ?
-				sseu->eu_total / sseu_subslice_total(sseu) :
-				0;
+	eu_per_subslice = sseu_subslice_total(sseu) ?
+		sseu->eu_total / sseu_subslice_total(sseu) : 0;
+	sseu->min_eu_per_subslice = eu_per_subslice;
+	sseu->max_eu_per_subslice = eu_per_subslice;
 	/*
 	 * CHV supports subslice power gating on devices with more than
 	 * one subslice, and supports EU power gating on devices with
@@ -117,13 +119,14 @@ static void cherryview_sseu_info_init(struct drm_i915_private *dev_priv)
 	*/
 	sseu->has_slice_pg = 0;
 	sseu->has_subslice_pg = sseu_subslice_total(sseu) > 1;
-	sseu->has_eu_pg = (sseu->eu_per_subslice > 2);
+	sseu->has_eu_pg = eu_per_subslice > 2;
 }
 
 static void gen9_sseu_info_init(struct drm_i915_private *dev_priv)
 {
 	struct intel_device_info *info = mkwrite_device_info(dev_priv);
 	struct sseu_dev_info *sseu = &info->sseu;
+	unsigned int eu_per_subslice;
 	int s_max = 3, ss_max = 4, eu_max = 8;
 	int s, ss;
 	u32 fuse2, eu_disable;
@@ -179,9 +182,10 @@ static void gen9_sseu_info_init(struct drm_i915_private *dev_priv)
 	 * recovery. BXT is expected to be perfectly uniform in EU
 	 * distribution.
 	*/
-	sseu->eu_per_subslice = sseu_subslice_total(sseu) ?
-				DIV_ROUND_UP(sseu->eu_total,
-					     sseu_subslice_total(sseu)) : 0;
+	eu_per_subslice = sseu_subslice_total(sseu) ?
+		DIV_ROUND_UP(sseu->eu_total, sseu_subslice_total(sseu)) : 0;
+	sseu->min_eu_per_subslice = eu_per_subslice;
+	sseu->max_eu_per_subslice = eu_per_subslice;
 	/*
 	 * SKL supports slice power gating on devices with more than
 	 * one slice, and supports EU power gating on devices with
@@ -195,7 +199,7 @@ static void gen9_sseu_info_init(struct drm_i915_private *dev_priv)
 		hweight8(sseu->slice_mask) > 1;
 	sseu->has_subslice_pg =
 		IS_GEN9_LP(dev_priv) && sseu_subslice_total(sseu) > 1;
-	sseu->has_eu_pg = sseu->eu_per_subslice > 2;
+	sseu->has_eu_pg = eu_per_subslice > 2;
 
 	if (IS_GEN9_LP(dev_priv)) {
 #define IS_SS_DISABLED(ss)	(!(sseu->subslice_mask & BIT(ss)))
@@ -228,6 +232,7 @@ static void broadwell_sseu_info_init(struct drm_i915_private *dev_priv)
 {
 	struct sseu_dev_info *sseu = &mkwrite_device_info(dev_priv)->sseu;
 	const int s_max = 3, ss_max = 3, eu_max = 8;
+	unsigned int eu_per_subslice;
 	int s, ss;
 	u32 fuse2, eu_disable[3]; /* s_max */
 
@@ -282,9 +287,10 @@ static void broadwell_sseu_info_init(struct drm_i915_private *dev_priv)
 	 * subslices with the exception that any one EU in any one subslice may
 	 * be fused off for die recovery.
 	 */
-	sseu->eu_per_subslice = sseu_subslice_total(sseu) ?
-				DIV_ROUND_UP(sseu->eu_total,
-					     sseu_subslice_total(sseu)) : 0;
+	eu_per_subslice = sseu_subslice_total(sseu) ?
+		DIV_ROUND_UP(sseu->eu_total, sseu_subslice_total(sseu)) : 0;
+	sseu->min_eu_per_subslice = eu_per_subslice;
+	sseu->max_eu_per_subslice = eu_per_subslice;
 
 	/*
 	 * BDW supports slice power gating on devices with more than
@@ -421,7 +427,9 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
 	DRM_DEBUG_DRIVER("subslice per slice: %u\n",
 			 hweight8(info->sseu.subslice_mask));
 	DRM_DEBUG_DRIVER("EU total: %u\n", info->sseu.eu_total);
-	DRM_DEBUG_DRIVER("EU per subslice: %u\n", info->sseu.eu_per_subslice);
+	DRM_DEBUG_DRIVER("EU per subslice: [%u, %u]\n",
+			 info->sseu.min_eu_per_subslice,
+			 info->sseu.max_eu_per_subslice);
 	DRM_DEBUG_DRIVER("has slice power gating: %s\n",
 			 info->sseu.has_slice_pg ? "y" : "n");
 	DRM_DEBUG_DRIVER("has subslice power gating: %s\n",
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 014b30ace8a0..9a279a559a0f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1843,9 +1843,9 @@ make_rpcs(struct drm_i915_private *dev_priv)
 	}
 
 	if (INTEL_INFO(dev_priv)->sseu.has_eu_pg) {
-		rpcs |= INTEL_INFO(dev_priv)->sseu.eu_per_subslice <<
+		rpcs |= INTEL_INFO(dev_priv)->sseu.min_eu_per_subslice <<
 			GEN8_RPCS_EU_MIN_SHIFT;
-		rpcs |= INTEL_INFO(dev_priv)->sseu.eu_per_subslice <<
+		rpcs |= INTEL_INFO(dev_priv)->sseu.max_eu_per_subslice <<
 			GEN8_RPCS_EU_MAX_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 02/14] drm/i915: Program RPCS for Broadwell
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 03/14] drm/i915: Record the sseu configuration per-context & engine Lionel Landwerlin
                       ` (11 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Chris Wilson <chris@chris-wilson.co.uk>

Currently we only configure the power gating for Skylake and above, but
the configuration should equally apply to Broadwell and Braswell. Even
though there is not as much variation as for later generations, we want
to expose control over the configuration to userspace and may want to
opt out of the "always-enabled" setting.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9a279a559a0f..fec66204fe88 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1816,13 +1816,6 @@ make_rpcs(struct drm_i915_private *dev_priv)
 	u32 rpcs = 0;
 
 	/*
-	 * No explicit RPCS request is needed to ensure full
-	 * slice/subslice/EU enablement prior to Gen9.
-	*/
-	if (INTEL_GEN(dev_priv) < 9)
-		return 0;
-
-	/*
 	 * Starting in Gen9, render power gating can leave
 	 * slice/subslice/EU in a partially enabled state. We
 	 * must make an explicit request through RPCS for full
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 03/14] drm/i915: Record the sseu configuration per-context & engine
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 02/14] drm/i915: Program RPCS for Broadwell Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 04/14] drm/i915/perf: rework mux configurations queries Lionel Landwerlin
                       ` (10 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Chris Wilson <chris@chris-wilson.co.uk>

We want to expose the ability to reconfigure the slices, subslices and
EUs per context and per engine. To facilitate that, store the current
configuration on the context for each engine, which is initially set
to the device default upon creation.

v2: record sseu configuration per context & engine (Chris)
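
(Not part of the patch.) As a rough, self-contained illustration of how the
per-context value ends up in the RPCS register, the userspace-style sketch
below mirrors the make_rpcs() packing. The RPCS_* constants are placeholders
rather than the real GEN8_RPCS_* definitions from i915_reg.h, and the values
in main() are arbitrary:

 #include <stdint.h>
 #include <stdio.h>

 /* Placeholder bit layout; the real GEN8_RPCS_* values live in i915_reg.h. */
 #define RPCS_ENABLE		(1u << 31)
 #define RPCS_S_CNT_ENABLE	(1u << 18)
 #define RPCS_S_CNT_SHIFT	15
 #define RPCS_SS_CNT_ENABLE	(1u << 11)
 #define RPCS_SS_CNT_SHIFT	8
 #define RPCS_EU_MIN_SHIFT	0
 #define RPCS_EU_MAX_SHIFT	4

 struct sseu_dev_info {
 	uint8_t slice_mask;
 	uint8_t subslice_mask;
 	uint8_t min_eu_per_subslice;
 	uint8_t max_eu_per_subslice;
 	uint8_t has_slice_pg:1;
 	uint8_t has_subslice_pg:1;
 	uint8_t has_eu_pg:1;
 };

 static uint32_t make_rpcs(const struct sseu_dev_info *sseu)
 {
 	uint32_t rpcs = 0;

 	if (sseu->has_slice_pg) {
 		rpcs |= RPCS_S_CNT_ENABLE;
 		rpcs |= (uint32_t)__builtin_popcount(sseu->slice_mask) << RPCS_S_CNT_SHIFT;
 		rpcs |= RPCS_ENABLE;
 	}

 	if (sseu->has_subslice_pg) {
 		rpcs |= RPCS_SS_CNT_ENABLE;
 		rpcs |= (uint32_t)__builtin_popcount(sseu->subslice_mask) << RPCS_SS_CNT_SHIFT;
 		rpcs |= RPCS_ENABLE;
 	}

 	if (sseu->has_eu_pg) {
 		rpcs |= (uint32_t)sseu->min_eu_per_subslice << RPCS_EU_MIN_SHIFT;
 		rpcs |= (uint32_t)sseu->max_eu_per_subslice << RPCS_EU_MAX_SHIFT;
 		rpcs |= RPCS_ENABLE;
 	}

 	return rpcs;
 }

 int main(void)
 {
 	/* Arbitrary example: 1 slice, 3 subslices, 8 EUs per subslice. */
 	struct sseu_dev_info sseu = {
 		.slice_mask = 0x1,
 		.subslice_mask = 0x7,
 		.min_eu_per_subslice = 8,
 		.max_eu_per_subslice = 8,
 		.has_subslice_pg = 1,
 		.has_eu_pg = 1,
 	};

 	printf("RPCS = 0x%08x\n", (unsigned int)make_rpcs(&sseu));

 	return 0;
 }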

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         | 19 -------------------
 drivers/gpu/drm/i915/i915_gem_context.c |  6 ++++++
 drivers/gpu/drm/i915/i915_gem_context.h | 21 +++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.c        | 23 +++++++++--------------
 4 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7faad0ea18a8..d76e731e8156 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -779,25 +779,6 @@ struct intel_csr {
 	func(overlay_needs_physical); \
 	func(supports_tv);
 
-struct sseu_dev_info {
-	u8 slice_mask;
-	u8 subslice_mask;
-	u8 eu_total;
-	u8 min_eu_per_subslice;
-	u8 max_eu_per_subslice;
-	u8 min_eu_in_pool;
-	/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
-	u8 subslice_7eu[3];
-	u8 has_slice_pg:1;
-	u8 has_subslice_pg:1;
-	u8 has_eu_pg:1;
-};
-
-static inline unsigned int sseu_subslice_total(const struct sseu_dev_info *sseu)
-{
-	return hweight8(sseu->slice_mask) * hweight8(sseu->subslice_mask);
-}
-
 /* Keep in gen based order, and chronological order within a gen */
 enum intel_platform {
 	INTEL_PLATFORM_UNINITIALIZED = 0,
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index c5d1666d7071..60d0bbf0ae2b 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -184,6 +184,8 @@ __create_hw_context(struct drm_i915_private *dev_priv,
 		    struct drm_i915_file_private *file_priv)
 {
 	struct i915_gem_context *ctx;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
 	int ret;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -229,6 +231,10 @@ __create_hw_context(struct drm_i915_private *dev_priv,
 	 * is no remap info, it will be a NOP. */
 	ctx->remap_slice = ALL_L3_SLICES(dev_priv);
 
+	/* On all engines, use the whole device by default */
+	for_each_engine(engine, dev_priv, id)
+		ctx->engine[id].sseu = INTEL_INFO(dev_priv)->sseu;
+
 	i915_gem_context_set_bannable(ctx);
 	ctx->ring_size = 4 * PAGE_SIZE;
 	ctx->desc_template =
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 4af2ab94558b..e8247e2f6011 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -39,6 +39,25 @@ struct i915_hw_ppgtt;
 struct i915_vma;
 struct intel_ring;
 
+struct sseu_dev_info {
+	u8 slice_mask;
+	u8 subslice_mask;
+	u8 eu_total;
+	u8 min_eu_per_subslice;
+	u8 max_eu_per_subslice;
+	u8 min_eu_in_pool;
+	/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
+	u8 subslice_7eu[3];
+	u8 has_slice_pg:1;
+	u8 has_subslice_pg:1;
+	u8 has_eu_pg:1;
+};
+
+static inline unsigned int sseu_subslice_total(const struct sseu_dev_info *sseu)
+{
+	return hweight8(sseu->slice_mask) * hweight8(sseu->subslice_mask);
+}
+
 #define DEFAULT_CONTEXT_HANDLE 0
 
 /**
@@ -151,6 +170,8 @@ struct i915_gem_context {
 		u64 lrc_desc;
 		int pin_count;
 		bool initialised;
+		/** sseu: Control eu/slice partitioning */
+		struct sseu_dev_info sseu;
 	} engine[I915_NUM_ENGINES];
 
 	/** ring_size: size for allocating the per-engine ring buffer */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fec66204fe88..78fb1f452891 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1810,8 +1810,7 @@ int logical_xcs_ring_init(struct intel_engine_cs *engine)
 	return logical_ring_init(engine);
 }
 
-static u32
-make_rpcs(struct drm_i915_private *dev_priv)
+static u32 make_rpcs(const struct sseu_dev_info *sseu)
 {
 	u32 rpcs = 0;
 
@@ -1821,25 +1820,21 @@ make_rpcs(struct drm_i915_private *dev_priv)
 	 * must make an explicit request through RPCS for full
 	 * enablement.
 	*/
-	if (INTEL_INFO(dev_priv)->sseu.has_slice_pg) {
+	if (sseu->has_slice_pg) {
 		rpcs |= GEN8_RPCS_S_CNT_ENABLE;
-		rpcs |= hweight8(INTEL_INFO(dev_priv)->sseu.slice_mask) <<
-			GEN8_RPCS_S_CNT_SHIFT;
+		rpcs |= hweight8(sseu->slice_mask) << GEN8_RPCS_S_CNT_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
 
-	if (INTEL_INFO(dev_priv)->sseu.has_subslice_pg) {
+	if (sseu->has_subslice_pg) {
 		rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
-		rpcs |= hweight8(INTEL_INFO(dev_priv)->sseu.subslice_mask) <<
-			GEN8_RPCS_SS_CNT_SHIFT;
+		rpcs |= hweight8(sseu->subslice_mask) << GEN8_RPCS_SS_CNT_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
 
-	if (INTEL_INFO(dev_priv)->sseu.has_eu_pg) {
-		rpcs |= INTEL_INFO(dev_priv)->sseu.min_eu_per_subslice <<
-			GEN8_RPCS_EU_MIN_SHIFT;
-		rpcs |= INTEL_INFO(dev_priv)->sseu.max_eu_per_subslice <<
-			GEN8_RPCS_EU_MAX_SHIFT;
+	if (sseu->has_eu_pg) {
+		rpcs |= sseu->min_eu_per_subslice << GEN8_RPCS_EU_MIN_SHIFT;
+		rpcs |= sseu->max_eu_per_subslice << GEN8_RPCS_EU_MAX_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
 
@@ -1949,7 +1944,7 @@ static void execlists_init_reg_state(u32 *regs,
 	if (rcs) {
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
-			make_rpcs(dev_priv));
+			make_rpcs(&ctx->engine[engine->id].sseu));
 	}
 }
 
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 04/14] drm/i915/perf: rework mux configurations queries
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (2 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 03/14] drm/i915: Record the sseu configuration per-context & engine Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
                       ` (9 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

Gen8+ might have mux configurations per slice/subslice. Depending on
whether slices/subslices have been fused off, only part of the
configuration needs to be applied. This change reworks the mux
configuration query mechanism to allow more than one set of registers
to be programmed.

v2: s/n_mux_regs/n_mux_configs/ (Matthew)
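
(Not part of the patch.) A minimal, self-contained sketch of the reworked
query pattern in userspace C, with hypothetical config names and placeholder
register values: each getter fills an array of register lists and returns
how many of them apply to the current fusing, and the caller then programs
each set in turn, as hsw_enable_metric_set() does at the end of this diff:

 #include <stdio.h>

 struct oa_reg {
 	unsigned int addr;
 	unsigned int value;
 };

 /* Hypothetical per-slice MUX sets with placeholder values. */
 static const struct oa_reg mux_config_slice0[] = { { 0x9888, 0x00000001 } };
 static const struct oa_reg mux_config_slice1[] = { { 0x9888, 0x00000002 } };

 /* Same shape as the reworked getters: fill regs[]/lens[], return the count. */
 static int get_mux_config(unsigned int slice_mask,
 			  const struct oa_reg **regs, int *lens)
 {
 	int n = 0;

 	if (slice_mask & 0x1) {
 		regs[n] = mux_config_slice0;
 		lens[n] = 1;
 		n++;
 	}
 	if (slice_mask & 0x2) {
 		regs[n] = mux_config_slice1;
 		lens[n] = 1;
 		n++;
 	}

 	return n;
 }

 int main(void)
 {
 	const struct oa_reg *regs[2];
 	int lens[2];
 	int i, n = get_mux_config(0x3, regs, lens);

 	/* The driver loops over each applicable set, writing every register. */
 	for (i = 0; i < n; i++)
 		printf("apply config %d: %d register write(s)\n", i, lens[i]);

 	return 0;
 }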

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |   6 +-
 drivers/gpu/drm/i915/i915_oa_hsw.c | 215 ++++++++++++++++++++++++-------------
 drivers/gpu/drm/i915/i915_oa_hsw.h |   4 +-
 drivers/gpu/drm/i915/i915_perf.c   |   7 +-
 4 files changed, 151 insertions(+), 81 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d76e731e8156..225bee890822 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2396,8 +2396,10 @@ struct drm_i915_private {
 
 			int metrics_set;
 
-			const struct i915_oa_reg *mux_regs;
-			int mux_regs_len;
+			const struct i915_oa_reg *mux_regs[1];
+			int mux_regs_lens[1];
+			int n_mux_configs;
+
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
 
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c
index 4ddf756add31..8c13e0880e53 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.c
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.c
@@ -1,5 +1,7 @@
 /*
- * Autogenerated file, DO NOT EDIT manually!
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
  *
  * Copyright (c) 2015 Intel Corporation
  *
@@ -109,12 +111,21 @@ static const struct i915_oa_reg mux_config_render_basic[] = {
 	{ _MMIO(0x25428), 0x00042049 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_render_basic_mux_config(struct drm_i915_private *dev_priv,
-			    int *len)
+			    const struct i915_oa_reg **regs,
+			    int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_render_basic);
-	return mux_config_render_basic;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_compute_basic[] = {
@@ -172,12 +183,21 @@ static const struct i915_oa_reg mux_config_compute_basic[] = {
 	{ _MMIO(0x25428), 0x00000c03 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
-			     int *len)
+			     const struct i915_oa_reg **regs,
+			     int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_compute_basic);
-	return mux_config_compute_basic;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_compute_extended[] = {
@@ -221,12 +241,21 @@ static const struct i915_oa_reg mux_config_compute_extended[] = {
 	{ _MMIO(0x25428), 0x00000000 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
-				int *len)
+				const struct i915_oa_reg **regs,
+				int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_compute_extended);
-	return mux_config_compute_extended;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_memory_reads[] = {
@@ -281,12 +310,21 @@ static const struct i915_oa_reg mux_config_memory_reads[] = {
 	{ _MMIO(0x25428), 0x00000000 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
-			    int *len)
+			    const struct i915_oa_reg **regs,
+			    int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_memory_reads);
-	return mux_config_memory_reads;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_memory_writes[] = {
@@ -341,12 +379,21 @@ static const struct i915_oa_reg mux_config_memory_writes[] = {
 	{ _MMIO(0x25428), 0x00000000 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
-			     int *len)
+			     const struct i915_oa_reg **regs,
+			     int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_memory_writes);
-	return mux_config_memory_writes;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_sampler_balance[] = {
@@ -401,31 +448,40 @@ static const struct i915_oa_reg mux_config_sampler_balance[] = {
 	{ _MMIO(0x25428), 0x0004a54a },
 };
 
-static const struct i915_oa_reg *
+static int
 get_sampler_balance_mux_config(struct drm_i915_private *dev_priv,
-			       int *len)
+			       const struct i915_oa_reg **regs,
+			       int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_sampler_balance);
-	return mux_config_sampler_balance;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_balance;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_balance);
+	n++;
+
+	return n;
 }
 
 int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
 	dev_priv->perf.oa.b_counter_regs = NULL;
 	dev_priv->perf.oa.b_counter_regs_len = 0;
 
 	switch (dev_priv->perf.oa.metrics_set) {
 	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set");
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -438,14 +494,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_COMPUTE_BASIC:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_compute_basic_mux_config(dev_priv,
-						     &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set");
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -458,14 +515,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_COMPUTE_EXTENDED:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_compute_extended_mux_config(dev_priv,
-							&dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set");
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -478,14 +536,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_MEMORY_READS:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_memory_reads_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set");
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -498,14 +557,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_MEMORY_WRITES:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_memory_writes_mux_config(dev_priv,
-						     &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set");
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -518,14 +578,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_SAMPLER_BALANCE:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_sampler_balance_mux_config(dev_priv,
-						       &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_BALANCE\" metric set");
+						       dev_priv->perf.oa.mux_regs,
+						       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_BALANCE\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -677,35 +738,36 @@ static struct attribute_group group_sampler_balance = {
 int
 i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 {
-	int mux_len;
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
 	int ret = 0;
 
-	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 		if (ret)
 			goto error_render_basic;
 	}
-	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
 		if (ret)
 			goto error_compute_basic;
 	}
-	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
 		if (ret)
 			goto error_compute_extended;
 	}
-	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
 		if (ret)
 			goto error_memory_reads;
 	}
-	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
 		if (ret)
 			goto error_memory_writes;
 	}
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len)) {
+	if (get_sampler_balance_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_balance);
 		if (ret)
 			goto error_sampler_balance;
@@ -714,19 +776,19 @@ i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 	return 0;
 
 error_sampler_balance:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
 error_memory_writes:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
 error_memory_reads:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
 error_compute_extended:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
 error_compute_basic:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
@@ -735,18 +797,19 @@ i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 void
 i915_perf_unregister_sysfs_hsw(struct drm_i915_private *dev_priv)
 {
-	int mux_len;
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
 
-	if (get_render_basic_mux_config(dev_priv, &mux_len))
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
-	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
-	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
-	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
-	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_sampler_balance_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_balance);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.h b/drivers/gpu/drm/i915/i915_oa_hsw.h
index 429a229b5158..6fe7e0690ef3 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.h
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.h
@@ -1,5 +1,7 @@
 /*
- * Autogenerated file, DO NOT EDIT manually!
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
  *
  * Copyright (c) 2015 Intel Corporation
  *
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 85269bcc8372..7e56b895fd34 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1047,6 +1047,7 @@ static void config_oa_regs(struct drm_i915_private *dev_priv,
 static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
 {
 	int ret = i915_oa_select_metric_set_hsw(dev_priv);
+	int i;
 
 	if (ret)
 		return ret;
@@ -1068,8 +1069,10 @@ static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN6_UCGCTL1, (I915_READ(GEN6_UCGCTL1) |
 				  GEN6_CSUNIT_CLOCK_GATE_DISABLE));
 
-	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
-		       dev_priv->perf.oa.mux_regs_len);
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs[i],
+			       dev_priv->perf.oa.mux_regs_lens[i]);
+	}
 
 	/* It apparently takes a fairly long time for a new MUX
 	 * configuration to be be applied after these register writes.
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (3 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 04/14] drm/i915/perf: rework mux configurations queries Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
                       ` (8 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Adds static OA unit MUX, B Counter + Flex EU configurations for basic
render metrics on Broadwell, Cherryview, Skylake and Broxton. These are
auto-generated from an XML description of metric sets, currently
maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml WHITELIST=RenderBasic

v2: add newlines to debug messages + fix comment (Matthew Auld)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/Makefile         |   8 +-
 drivers/gpu/drm/i915/i915_drv.h       |   6 +-
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 392 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h    |  40 ++++
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 248 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h    |  40 ++++
 drivers/gpu/drm/i915/i915_oa_chv.c    | 238 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h    |  40 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 238 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h |  40 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 249 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h |  40 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 260 ++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h |  40 ++++
 14 files changed, 1876 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 7b05fb802f4c..aabc660f94cb 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -128,7 +128,13 @@ i915-y += i915_vgpu.o
 
 # perf code
 i915-y += i915_perf.o \
-	  i915_oa_hsw.o
+	  i915_oa_hsw.o \
+	  i915_oa_bdw.o \
+	  i915_oa_chv.o \
+	  i915_oa_sklgt2.o \
+	  i915_oa_sklgt3.o \
+	  i915_oa_sklgt4.o \
+	  i915_oa_bxt.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 225bee890822..db0378b71941 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2396,12 +2396,14 @@ struct drm_i915_private {
 
 			int metrics_set;
 
-			const struct i915_oa_reg *mux_regs[1];
-			int mux_regs_lens[1];
+			const struct i915_oa_reg *mux_regs[2];
+			int mux_regs_lens[2];
 			int n_mux_configs;
 
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
+			const struct i915_oa_reg *flex_regs;
+			int flex_regs_len;
 
 			struct {
 				struct i915_vma *vma;
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
new file mode 100644
index 000000000000..9a11c03b4ecb
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -0,0 +1,392 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bdw.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bdw = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14110014 },
+	{ _MMIO(0x9888), 0x14310014 },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x183d0800 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x08584000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x185b2400 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380001 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00104000 },
+	{ _MMIO(0x9888), 0x08104000 },
+	{ _MMIO(0x9888), 0x00110030 },
+	{ _MMIO(0x9888), 0x08110031 },
+	{ _MMIO(0x9888), 0x10110000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x16130020 },
+	{ _MMIO(0x9888), 0x06308000 },
+	{ _MMIO(0x9888), 0x08308000 },
+	{ _MMIO(0x9888), 0x06311800 },
+	{ _MMIO(0x9888), 0x08311880 },
+	{ _MMIO(0x9888), 0x10310000 },
+	{ _MMIO(0x9888), 0x0e334000 },
+	{ _MMIO(0x9888), 0x16330080 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x0d888000 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a00a0 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2820 },
+	{ _MMIO(0x9888), 0x258b2550 },
+	{ _MMIO(0x9888), 0x198c1000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801110 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801465 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800ca5 },
+	{ _MMIO(0x9888), 0x41800003 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x14910014 },
+	{ _MMIO(0x9888), 0x14b10014 },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0e1f8000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00dc4000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x00bd8000 },
+	{ _MMIO(0x9888), 0x18bd0800 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x00d84000 },
+	{ _MMIO(0x9888), 0x08d84000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db2400 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x0c9f0800 },
+	{ _MMIO(0x9888), 0x0e9f2a00 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b80001 },
+	{ _MMIO(0x9888), 0x00b92000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x00904000 },
+	{ _MMIO(0x9888), 0x08904000 },
+	{ _MMIO(0x9888), 0x00910030 },
+	{ _MMIO(0x9888), 0x08910031 },
+	{ _MMIO(0x9888), 0x10910000 },
+	{ _MMIO(0x9888), 0x00934000 },
+	{ _MMIO(0x9888), 0x16930020 },
+	{ _MMIO(0x9888), 0x06b08000 },
+	{ _MMIO(0x9888), 0x08b08000 },
+	{ _MMIO(0x9888), 0x06b11800 },
+	{ _MMIO(0x9888), 0x08b11880 },
+	{ _MMIO(0x9888), 0x10b10000 },
+	{ _MMIO(0x9888), 0x0eb34000 },
+	{ _MMIO(0x9888), 0x16b30080 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88b800 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x1b8a0080 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2840 },
+	{ _MMIO(0x9888), 0x258b26a0 },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1100 },
+	{ _MMIO(0x9888), 0x018d2000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801550 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800400 },
+	{ _MMIO(0x9888), 0x458004a1 },
+	{ _MMIO(0x9888), 0x53805555 },
+	{ _MMIO(0x9888), 0x47800421 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801421 },
+	{ _MMIO(0x9888), 0x41800845 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		regs[n] = mux_config_render_basic_0_slices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_0_slices_0x01);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		regs[n] = mux_config_render_basic_1_slices_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_1_slices_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.h b/drivers/gpu/drm/i915/i915_oa_bdw.h
new file mode 100644
index 000000000000..6363ff9f64c0
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BDW_H__
+#define __I915_OA_BDW_H__
+
+extern int i915_oa_n_builtin_metric_sets_bdw;
+
+extern int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
new file mode 100644
index 000000000000..345ec1d3faa7
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -0,0 +1,248 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bxt.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bxt = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x166c00f0 },
+	{ _MMIO(0x9888), 0x12120280 },
+	{ _MMIO(0x9888), 0x12320280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x419000a0 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2e0800 },
+	{ _MMIO(0x9888), 0x0e2e5900 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x1c4f0010 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a0fcc00 },
+	{ _MMIO(0x9888), 0x1c0f0002 },
+	{ _MMIO(0x9888), 0x1c2c0040 },
+	{ _MMIO(0x9888), 0x00101000 },
+	{ _MMIO(0x9888), 0x04101000 },
+	{ _MMIO(0x9888), 0x00114000 },
+	{ _MMIO(0x9888), 0x08114000 },
+	{ _MMIO(0x9888), 0x00120020 },
+	{ _MMIO(0x9888), 0x08120021 },
+	{ _MMIO(0x9888), 0x00141000 },
+	{ _MMIO(0x9888), 0x08141000 },
+	{ _MMIO(0x9888), 0x02308000 },
+	{ _MMIO(0x9888), 0x04302000 },
+	{ _MMIO(0x9888), 0x06318000 },
+	{ _MMIO(0x9888), 0x08318000 },
+	{ _MMIO(0x9888), 0x06320800 },
+	{ _MMIO(0x9888), 0x08320840 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x08344000 },
+	{ _MMIO(0x9888), 0x0d931831 },
+	{ _MMIO(0x9888), 0x0f939f3f },
+	{ _MMIO(0x9888), 0x01939e80 },
+	{ _MMIO(0x9888), 0x039303bc },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1993002a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900187 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901110 },
+	{ _MMIO(0x9888), 0x43900423 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900c02 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900020 },
+	{ _MMIO(0x9888), 0x59901111 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900821 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		regs[n] = mux_config_render_basic_0_sku_gte_0x03;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_0_sku_gte_0x03);
+		n++;
+	}
+
+	return n;
+}
+
+int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "22b9519a-e9ba-4c41-8b54-f4f8ca14fa0a",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.h b/drivers/gpu/drm/i915/i915_oa_bxt.h
new file mode 100644
index 000000000000..6cf7ba746e7e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BXT_H__
+#define __I915_OA_BXT_H__
+
+extern int i915_oa_n_builtin_metric_sets_bxt;
+
+extern int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
new file mode 100644
index 000000000000..b15f6c980d11
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -0,0 +1,238 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_chv.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_chv = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x285a0006 },
+	{ _MMIO(0x9888), 0x2c110014 },
+	{ _MMIO(0x9888), 0x2e110000 },
+	{ _MMIO(0x9888), 0x2c310014 },
+	{ _MMIO(0x9888), 0x2e310000 },
+	{ _MMIO(0x9888), 0x2b8303df },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x00580888 },
+	{ _MMIO(0x9888), 0x1e5a0015 },
+	{ _MMIO(0x9888), 0x205a0014 },
+	{ _MMIO(0x9888), 0x045a0000 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x02180500 },
+	{ _MMIO(0x9888), 0x00190555 },
+	{ _MMIO(0x9888), 0x021d0500 },
+	{ _MMIO(0x9888), 0x021f0a00 },
+	{ _MMIO(0x9888), 0x00380444 },
+	{ _MMIO(0x9888), 0x02390500 },
+	{ _MMIO(0x9888), 0x003a0666 },
+	{ _MMIO(0x9888), 0x00100111 },
+	{ _MMIO(0x9888), 0x06110030 },
+	{ _MMIO(0x9888), 0x0a110031 },
+	{ _MMIO(0x9888), 0x0e110046 },
+	{ _MMIO(0x9888), 0x04110000 },
+	{ _MMIO(0x9888), 0x00110000 },
+	{ _MMIO(0x9888), 0x00130111 },
+	{ _MMIO(0x9888), 0x00300444 },
+	{ _MMIO(0x9888), 0x08310030 },
+	{ _MMIO(0x9888), 0x0c310031 },
+	{ _MMIO(0x9888), 0x10310046 },
+	{ _MMIO(0x9888), 0x04310000 },
+	{ _MMIO(0x9888), 0x00310000 },
+	{ _MMIO(0x9888), 0x00330444 },
+	{ _MMIO(0x9888), 0x038a0a00 },
+	{ _MMIO(0x9888), 0x018b0fff },
+	{ _MMIO(0x9888), 0x038b0a00 },
+	{ _MMIO(0x9888), 0x01855000 },
+	{ _MMIO(0x9888), 0x03850055 },
+	{ _MMIO(0x9888), 0x13830021 },
+	{ _MMIO(0x9888), 0x15830020 },
+	{ _MMIO(0x9888), 0x1783002f },
+	{ _MMIO(0x9888), 0x1983002e },
+	{ _MMIO(0x9888), 0x1b83002d },
+	{ _MMIO(0x9888), 0x1d83002c },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x01840555 },
+	{ _MMIO(0x9888), 0x03840500 },
+	{ _MMIO(0x9888), 0x23800074 },
+	{ _MMIO(0x9888), 0x2580007d },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x01805000 },
+	{ _MMIO(0x9888), 0x03800055 },
+	{ _MMIO(0x9888), 0x01865000 },
+	{ _MMIO(0x9888), 0x03860055 },
+	{ _MMIO(0x9888), 0x01875000 },
+	{ _MMIO(0x9888), 0x03870055 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x4380000a },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x4780000a },
+	{ _MMIO(0x9888), 0x49800000 },
+	{ _MMIO(0x9888), 0x4b800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x55800000 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "9d8a3af5-c02c-4a4a-b947-f1672469e0fb",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.h b/drivers/gpu/drm/i915/i915_oa_chv.h
new file mode 100644
index 000000000000..8b8bdc26d726
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_CHV_H__
+#define __I915_OA_CHV_H__
+
+extern int i915_oa_n_builtin_metric_sets_chv;
+
+extern int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
new file mode 100644
index 000000000000..9ab9d21ec335
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -0,0 +1,238 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt2.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0080 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0d2000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162c2200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190001f },
+	{ _MMIO(0x9888), 0x51904400 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c21 },
+	{ _MMIO(0x9888), 0x47900061 },
+	{ _MMIO(0x9888), 0x57904440 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900004 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (dev_priv->drm.pdev->revision >= 0x02) {
+		regs[n] = mux_config_render_basic_1_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_1_sku_gte_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.h b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
new file mode 100644
index 000000000000..f4397baf3328
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT2_H__
+#define __I915_OA_SKLGT2_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt2;
+
+extern int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
new file mode 100644
index 000000000000..e32d3b3ad77a
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -0,0 +1,249 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt3.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0380 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162ca200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0200 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x51907710 },
+	{ _MMIO(0x9888), 0x419020a0 },
+	{ _MMIO(0x9888), 0x55901515 },
+	{ _MMIO(0x9888), 0x45900529 },
+	{ _MMIO(0x9888), 0x47901025 },
+	{ _MMIO(0x9888), 0x57907770 },
+	{ _MMIO(0x9888), 0x49902100 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900108 },
+	{ _MMIO(0x9888), 0x59900007 },
+	{ _MMIO(0x9888), 0x43902108 },
+	{ _MMIO(0x9888), 0x53907777 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.h b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
new file mode 100644
index 000000000000..c0accb1f9b74
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT3_H__
+#define __I915_OA_SKLGT3_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt3;
+
+extern int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
new file mode 100644
index 000000000000..ed034f190a6c
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -0,0 +1,260 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt4.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x176c01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e03b0 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4ca400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0230 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0acc2000 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x088d8000 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x0e8f1000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8800 },
+	{ _MMIO(0x9888), 0x1b4e0020 },
+	{ _MMIO(0x9888), 0x096c5300 },
+	{ _MMIO(0x9888), 0x116c0000 },
+	{ _MMIO(0x9888), 0x1d6c0000 },
+	{ _MMIO(0x9888), 0x091b8000 },
+	{ _MMIO(0x9888), 0x1b1c8000 },
+	{ _MMIO(0x9888), 0x0b4c2000 },
+	{ _MMIO(0x9888), 0x090d8000 },
+	{ _MMIO(0x9888), 0x0f0f1000 },
+	{ _MMIO(0x9888), 0x172c0800 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x5190ff30 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55903033 },
+	{ _MMIO(0x9888), 0x45901421 },
+	{ _MMIO(0x9888), 0x47900803 },
+	{ _MMIO(0x9888), 0x5790fff1 },
+	{ _MMIO(0x9888), 0x49900001 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x5990000f },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x5390ffff },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.h b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
new file mode 100644
index 000000000000..1b718f15f62e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT4_H__
+#define __I915_OA_SKLGT4_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt4;
+
+extern int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+#endif
-- 
2.11.0


* [PATCH v14 06/14] drm/i915/perf: Add OA unit support for Gen 8+
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (4 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
                       ` (7 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency, which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands), is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state,
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently-out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, we can
no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try and subtract the progress
of other contexts before forwarding reports via read(), we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require dev.i915.perf_stream_paranoid
to be unset for any non-root access to OA metrics.
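
As a rough illustration (not part of this patch; the period exponent is
an arbitrary example value, and the metric set ID is assumed to have
been read from the "id" file that i915_perf_register_sysfs_*() exposes
under sysfs), opening a periodic OA sampling stream from userspace on
Gen8+ might look approximately like this, given an already-open DRM fd:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

int open_oa_stream(int drm_fd, uint64_t metrics_set_id)
{
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16, /* example period exponent */
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	/* Without root, Gen8+ also needs dev.i915.perf_stream_paranoid=0 */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}

On success the ioctl returns a new stream fd that records can be read()
from.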

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

v6: In addition to drain, switch to kernel context & update all
    context in place (Chris)

v7: Add missing mutex_unlock() if switching to kernel context fails
    (Matthew)

v8: Simplify OA period/flex-eu-counters programming by using the
    batchbuffer instead of modifying ctx-image (Lionel)

v9: Back to updating the context image (due to erroneous testing,
    batchbuffer programming the OA unit doesn't actually work)
    (Lionel)
    Pin context before updating context image (Chris)
    Drop MMIO programming now that we switch to a kernel context with
    right values in initial context image (Chris)

v10: Just pin_map the contexts we want to modify or let the
     configuration happen on first use (Chris)

v11: Update kernel context OA config through the batchbuffer rather
     than on the fly ctx-image update (Lionel)

v12: Rework OA context registers update again by switching away from
     user contexts and reconfiguring the kernel context through the
     batchbuffer and updating all the other contexts' context image.
     Also take care to lock slice/subslice configuration when OA is
     on. (Lionel)

v13: Request rpcs updates on all engines when updating the OA config
     (Lionel)

v14: Drop any kind of rpcs management now that we monitor sseu
     configuration changes in a later patch (Lionel)
     Remove usleep after programming the NOA configs on Gen8+; it
     doesn't seem to be needed (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
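
For reference while reviewing the context-ID filtering in
gen8_append_oa_reports() below, here is a minimal sketch of how
userspace might consume records from the stream fd (not part of this
patch; the read size is an arbitrary choice and error handling is
elided):

#include <stdint.h>
#include <unistd.h>
#include <drm/i915_drm.h>

static void read_oa_records(int stream_fd)
{
	uint8_t buf[64 * 1024];
	ssize_t len = read(stream_fd, buf, sizeof(buf));
	ssize_t offset = 0;

	if (len <= 0)
		return;

	while (offset + (ssize_t)sizeof(struct drm_i915_perf_record_header) <= len) {
		const struct drm_i915_perf_record_header *hdr =
			(const void *)(buf + offset);

		if (hdr->type == DRM_I915_PERF_RECORD_SAMPLE) {
			/* With SAMPLE_OA the raw OA report follows the header;
			 * on Gen8+ report32[2] carries the context ID, which
			 * the kernel squashes to an invalid ID for reports
			 * from other contexts when filtering for a single
			 * context (see gen8_append_oa_reports() below).
			 */
			const uint32_t *report32 = (const void *)(hdr + 1);
			(void)report32;
		}
		offset += hdr->size; /* size includes the record header */
	}
}
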
 drivers/gpu/drm/i915/i915_drv.h  |   45 +-
 drivers/gpu/drm/i915/i915_perf.c | 1005 +++++++++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h  |   22 +
 drivers/gpu/drm/i915/intel_lrc.c |    2 +
 drivers/gpu/drm/i915/intel_lrc.h |    2 +
 include/uapi/drm/i915_drm.h      |   19 +-
 6 files changed, 1000 insertions(+), 95 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index db0378b71941..84bde667bb45 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1997,9 +1997,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
 
 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2030,20 +2038,13 @@ struct i915_oa_ops {
 		    size_t *offset);
 
 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };
 
 struct intel_cdclk_state {
@@ -2408,6 +2409,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;
 
@@ -2477,6 +2479,14 @@ struct drm_i915_private {
 			} oa_buffer;
 
 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;
 
 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2787,6 +2797,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3517,6 +3529,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
 
 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    uint32_t *reg_state);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 7e56b895fd34..9f524a0ec727 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@
 
 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
 
 #define INVALID_CTX_ID 0xffffffff
 
+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+
 
 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };
 
+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)
 
 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };
 
+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;
 
@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
 
-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
 
 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
 
@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note: that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context then it
+		 * needs be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base, so add it back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiates appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering how we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, a buffer overflow most likely indicates
+	 * that something has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * gen7_append_oa_reports - Copies all buffered OA reports into userspace
+ * read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -732,7 +1044,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;
 
-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
 
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -775,7 +1088,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;
 
 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }
 
 /**
@@ -832,30 +1145,43 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	struct intel_ring *ring;
-	int ret;
 
-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
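+	/* With execlists the stream's context can be identified in OA
+	 * reports by its software assigned hw_id, so there's no need to
+	 * pin the context to fix up a GGTT based ID.
+	 */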
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		struct intel_ring *ring;
+		int ret;
 
-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ring = engine->context_pin(engine, stream->ctx);
-	mutex_unlock(&dev_priv->drm.struct_mutex);
-	if (IS_ERR(ring))
-		return PTR_ERR(ring);
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;
 
-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we
+		 * pin the vma to ensure the ID remains fixed.
+		 *
+		 * NB: implied RCS engine...
+		 */
+		ring = engine->context_pin(engine, stream->ctx);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		if (IS_ERR(ring))
+			return PTR_ERR(ring);
+
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+	}
 
 	return 0;
 }
@@ -870,14 +1196,19 @@ static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
 
-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
 
-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		mutex_lock(&dev_priv->drm.struct_mutex);
 
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);
+
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }
 
 static void
@@ -901,6 +1232,11 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 
 	BUG_ON(stream != dev_priv->perf.oa.exclusive_stream);
 
+	/* Unset exclusive_stream first, as it might be checked while
+	 * disabling the metric set on gen8+.
+	 */
+	dev_priv->perf.oa.exclusive_stream = NULL;
+
 	dev_priv->perf.oa.ops.disable_metric_set(dev_priv);
 
 	free_oa_buffer(dev_priv);
@@ -911,8 +1247,6 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 	if (stream->ctx)
 		oa_put_render_ctx_id(stream);
 
-	dev_priv->perf.oa.exclusive_stream = NULL;
-
 	if (dev_priv->perf.oa.spurious_report_rs.missed) {
 		DRM_NOTE("%d spurious OA report notices suppressed due to ratelimiting\n",
 			 dev_priv->perf.oa.spurious_report_rs.missed);
@@ -967,6 +1301,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }
 
+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1114,6 +1503,311 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }
 
+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
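+	/* The register state context stores registers as (MMIO offset, value)
+	 * pairs, so each entry below occupies two dwords; ctx_oactxctrl and
+	 * ctx_flexeu0 are the dword indices of the OACTXCONTROL and first
+	 * flex EU pairs respectively.
+	 */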
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+/* Same as gen8_update_reg_state_unlocked(), only applied through the batch
+ * buffer. This is only used by the kernel context. */
+static int gen8_emit_oa_config(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv = req->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	u32 *cs;
+	int i;
+
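+	/* Ring space: one dword for the MI_LOAD_REGISTER_IMM header, two
+	 * dwords (offset, value) for OACTXCONTROL, two per flex EU register
+	 * and a trailing MI_NOOP, i.e. n_flex_regs * 2 + 4 in total.
+	 */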
+	cs = intel_ring_begin(req, n_flex_regs * 2 + 4);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_LOAD_REGISTER_IMM(n_flex_regs + 1);
+
+	*cs++ = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	*cs++ = (dev_priv->perf.oa.period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) |
+		(dev_priv->perf.oa.periodic ? GEN8_OA_TIMER_ENABLE : 0) |
+		GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		*cs++ = mmio;
+		*cs++ = value;
+	}
+
+	*cs++ = MI_NOOP;
+	intel_ring_advance(req, cs);
+
+	return 0;
+}
+
+static int gen8_switch_to_updated_kernel_context(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine = dev_priv->engine[RCS];
+	struct i915_gem_timeline *timeline;
+	struct drm_i915_gem_request *req;
+	int ret;
+
+	lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+	i915_gem_retire_requests(dev_priv);
+
+	req = i915_gem_request_alloc(engine, dev_priv->kernel_context);
+	if (IS_ERR(req))
+		return PTR_ERR(req);
+
+	ret = gen8_emit_oa_config(req);
+	if (ret)
+		return ret;
+
+	/* Queue this switch after all other activity */
+	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
+		struct drm_i915_gem_request *prev;
+		struct intel_timeline *tl;
+
+		tl = &timeline->engine[engine->id];
+		prev = i915_gem_active_raw(&tl->last_request,
+					   &dev_priv->drm.struct_mutex);
+		if (prev)
+			i915_sw_fence_await_sw_fence_gfp(&req->submit,
+							 &prev->submit,
+							 GFP_KERNEL);
+	}
+
+	ret = i915_switch_context(req);
+	i915_add_request(req);
+
+	return ret;
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
+				       bool interruptible)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+	unsigned int wait_flags = I915_WAIT_LOCKED;
+
+	if (interruptible) {
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;
+
+		wait_flags |= I915_WAIT_INTERRUPTIBLE;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);
+	}
+
+	/* Switch away from any user context. */
+	ret = gen8_switch_to_updated_kernel_context(dev_priv);
+	if (ret)
+		goto out;
+
+	/* The OA register config is set up through the context image. This image
+	 * might be written to by the GPU on context switch (in particular on
+	 * lite-restore). This means we can't safely update a context's image,
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave a small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv, wait_flags);
+	if (ret)
+		goto out;
+
+	/* Update all contexts now that we've stalled the submission. */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		struct intel_context *ce = &ctx->engine[RCS];
+		u32 *regs;
+
+		/* OA settings will be set upon first use */
+		if (!ce->state)
+			continue;
+
+		regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB);
+		if (IS_ERR(regs)) {
+			ret = PTR_ERR(regs);
+			goto out;
+		}
+
+		ce->state->obj->mm.dirty = true;
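+		/* The register state starts LRC_STATE_PN pages into the
+		 * context image, so skip ahead before patching the OA
+		 * related entries.
+		 */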
+		regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
+
+		gen8_update_reg_state_unlocked(ctx, regs);
+
+		i915_gem_object_unpin_map(ce->state->obj);
+	}
+
+ out:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	return ret;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+	int i;
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed, causing a lot of redundant
+	 * work for userspace to process and increasing the chances we'll
+	 * hit buffer overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature it's worth noting that clock ratio reports have to be
+	 * disabled before considering using that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	/* Update all contexts prior to writing the mux configurations as we
+	 * need to make sure all slices/subslices are ON before writing to NOA
+	 * registers. */
+	ret = gen8_configure_all_contexts(dev_priv, true);
+	if (ret)
+		return ret;
+
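+	/* Assert an additional chicken bit alongside GT_NOA_ENABLE (0x80)
+	 * while writing the NOA mux configuration, then drop back to just
+	 * GT_NOA_ENABLE once the mux registers have been written.
+	 */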
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs[i],
+			       dev_priv->perf.oa.mux_regs_lens[i]);
+	}
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* Reset all contexts' slices/subslices configurations. */
+	gen8_configure_all_contexts(dev_priv, false);
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1158,6 +1852,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }
 
+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory, which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1184,6 +1901,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }
 
+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1362,6 +2084,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }
 
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
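+	/* Called from execlists_init_reg_state() so that newly initialised
+	 * context images pick up the current OA configuration.
+	 */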
+	gen8_update_reg_state_unlocked(ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1487,7 +2224,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);
 
-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1776,6 +2513,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;
 
@@ -1793,12 +2531,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}
 
+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2070,9 +2824,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;
 
@@ -2088,11 +2839,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;
 
-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}
 
+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2108,13 +2886,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;
 
-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2173,36 +2962,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note that although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming, did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enables
+		 * execlist mode by default.
+		 */
 
-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
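+		/* ctx_oactxctrl_offset and ctx_flexeu0_offset are dword
+		 * indices into the register state context image where
+		 * OACTXCONTROL and EU_PERF_CNTL0 live; the layout differs
+		 * between Gen8 and Gen9.
+		 */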
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}
 
-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}
 
-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
 
-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
 
-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }
 
 /**
@@ -2217,5 +3075,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);
 
 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 89888adb9af1..dd433e860566 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define GEN8_OACTXID _MMIO(0x2364)
 
+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)
 
+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)
 
 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)
 
 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0
 
 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define OA_MEM_SELECT_GGTT  (1<<0)
 
+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)
 
 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)
 
+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 78fb1f452891..ad37e2b236b0 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1945,6 +1945,8 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(&ctx->engine[engine->id].sseu));
+
+		i915_oa_init_reg_state(engine, ctx, regs);
 	}
 }
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 52b3a1fd4059..eb7bb51c0a4a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -62,6 +62,8 @@ enum {
 	INTEL_CONTEXT_SCHEDULE_OUT,
 };
 
+struct sseu_dev_info;
+
 /* Logical Rings */
 void intel_logical_ring_stop(struct intel_engine_cs *engine);
 void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index f24a80d2d42e..d561b1f98b20 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1308,13 +1308,18 @@ struct drm_i915_gem_context_param {
 };
 
 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 
 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (5 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 08/14] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
                       ` (6 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

These are auto-generated from an XML description of metric sets,
currently maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h       |    4 +-
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 5066 ++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 2464 +++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_chv.c    | 2657 ++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_hsw.c    |   48 +
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 3323 ++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 2872 ++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 2915 ++++++++++++++++++-
 8 files changed, 19161 insertions(+), 188 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 84bde667bb45..7741881e9605 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2397,8 +2397,8 @@ struct drm_i915_private {
 
 			int metrics_set;
 
-			const struct i915_oa_reg *mux_regs[2];
-			int mux_regs_lens[2];
+			const struct i915_oa_reg *mux_regs[6];
+			int mux_regs_lens[6];
 			int n_mux_configs;
 
 			const struct i915_oa_reg *b_counter_regs;
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
index 9a11c03b4ecb..d4462c2aaaee 100644
--- a/drivers/gpu/drm/i915/i915_oa_bdw.c
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -33,9 +33,30 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_DATA_PORT_READS_COALESCING,
+	METRIC_SET_ID_DATA_PORT_WRITES_COALESCING,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_bdw = 1;
+int i915_oa_n_builtin_metric_sets_bdw = 22;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -300,66 +321,4819 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x065c2100 },
+	{ _MMIO(0x9888), 0x0a5c0041 },
+	{ _MMIO(0x9888), 0x0c5c6600 },
+	{ _MMIO(0x9888), 0x005c6580 },
+	{ _MMIO(0x9888), 0x085c8000 },
+	{ _MMIO(0x9888), 0x0e5c8000 },
+	{ _MMIO(0x9888), 0x00580042 },
+	{ _MMIO(0x9888), 0x08582080 },
+	{ _MMIO(0x9888), 0x0c58004c },
+	{ _MMIO(0x9888), 0x0e582580 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x185b1000 },
+	{ _MMIO(0x9888), 0x1a5b0104 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x08380042 },
+	{ _MMIO(0x9888), 0x0a382080 },
+	{ _MMIO(0x9888), 0x0e38404c },
+	{ _MMIO(0x9888), 0x0238404b },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x16380000 },
+	{ _MMIO(0x9888), 0x18381145 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801062 },
+	{ _MMIO(0x9888), 0x41801084 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_2_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x06dc2100 },
+	{ _MMIO(0x9888), 0x0adc0041 },
+	{ _MMIO(0x9888), 0x0cdc6600 },
+	{ _MMIO(0x9888), 0x00dc6580 },
+	{ _MMIO(0x9888), 0x08dc8000 },
+	{ _MMIO(0x9888), 0x0edc8000 },
+	{ _MMIO(0x9888), 0x00d80042 },
+	{ _MMIO(0x9888), 0x08d82080 },
+	{ _MMIO(0x9888), 0x0cd8004c },
+	{ _MMIO(0x9888), 0x0ed82580 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x18db1000 },
+	{ _MMIO(0x9888), 0x1adb0104 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x08b80042 },
+	{ _MMIO(0x9888), 0x0ab82080 },
+	{ _MMIO(0x9888), 0x0eb8404c },
+	{ _MMIO(0x9888), 0x02b8404b },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x16b80000 },
+	{ _MMIO(0x9888), 0x18b81145 },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b92000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x238b0540 },
+	{ _MMIO(0x9888), 0x258baaa0 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d805000 },
+	{ _MMIO(0x9888), 0x4f800555 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800062 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		regs[n] = mux_config_compute_basic_0_slices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		regs[n] = mux_config_compute_basic_2_slices_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_2_slices_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0a1e0000 },
+	{ _MMIO(0x9888), 0x0c1f000f },
+	{ _MMIO(0x9888), 0x10176800 },
+	{ _MMIO(0x9888), 0x1191001f },
+	{ _MMIO(0x9888), 0x0b880320 },
+	{ _MMIO(0x9888), 0x01890c40 },
+	{ _MMIO(0x9888), 0x118a1c00 },
+	{ _MMIO(0x9888), 0x118d7c00 },
+	{ _MMIO(0x9888), 0x118e0020 },
+	{ _MMIO(0x9888), 0x118f4c00 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x13900001 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x0c3d8000 },
+	{ _MMIO(0x9888), 0x06584000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x081e0040 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x021f5400 },
+	{ _MMIO(0x9888), 0x001f0000 },
+	{ _MMIO(0x9888), 0x101f0010 },
+	{ _MMIO(0x9888), 0x0e1f0080 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c13c000 },
+	{ _MMIO(0x9888), 0x06164000 },
+	{ _MMIO(0x9888), 0x06170012 },
+	{ _MMIO(0x9888), 0x00170000 },
+	{ _MMIO(0x9888), 0x01910005 },
+	{ _MMIO(0x9888), 0x07880002 },
+	{ _MMIO(0x9888), 0x01880c00 },
+	{ _MMIO(0x9888), 0x0f880000 },
+	{ _MMIO(0x9888), 0x0d880000 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x09890032 },
+	{ _MMIO(0x9888), 0x078a0800 },
+	{ _MMIO(0x9888), 0x0f8a0a00 },
+	{ _MMIO(0x9888), 0x198a4000 },
+	{ _MMIO(0x9888), 0x1b8a2000 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x038a4000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b54c0 },
+	{ _MMIO(0x9888), 0x258baa55 },
+	{ _MMIO(0x9888), 0x278b0019 },
+	{ _MMIO(0x9888), 0x198c0100 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0f8d0015 },
+	{ _MMIO(0x9888), 0x018d1000 },
+	{ _MMIO(0x9888), 0x098d8000 },
+	{ _MMIO(0x9888), 0x0b8df000 },
+	{ _MMIO(0x9888), 0x0d8d3000 },
+	{ _MMIO(0x9888), 0x038de000 },
+	{ _MMIO(0x9888), 0x058d3000 },
+	{ _MMIO(0x9888), 0x0d8e0004 },
+	{ _MMIO(0x9888), 0x058e000c },
+	{ _MMIO(0x9888), 0x098e0000 },
+	{ _MMIO(0x9888), 0x078e0000 },
+	{ _MMIO(0x9888), 0x038e0000 },
+	{ _MMIO(0x9888), 0x0b8f0020 },
+	{ _MMIO(0x9888), 0x198f0c00 },
+	{ _MMIO(0x9888), 0x078f8000 },
+	{ _MMIO(0x9888), 0x098f4000 },
+	{ _MMIO(0x9888), 0x0b900980 },
+	{ _MMIO(0x9888), 0x03900d80 },
+	{ _MMIO(0x9888), 0x01900000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801111 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f801011 },
+	{ _MMIO(0x9888), 0x43800443 },
+	{ _MMIO(0x9888), 0x51801111 },
+	{ _MMIO(0x9888), 0x45800422 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x47800c60 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800422 },
+	{ _MMIO(0x9888), 0x41800021 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845800 },
+	{ _MMIO(0x9888), 0x15840018 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385000a },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840018 },
+	{ _MMIO(0x9888), 0x07844c80 },
+	{ _MMIO(0x9888), 0x09840d9a },
+	{ _MMIO(0x9888), 0x0b840e9c },
+	{ _MMIO(0x9888), 0x0d840f9e },
+	{ _MMIO(0x9888), 0x0f840010 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801042 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845400 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840010 },
+	{ _MMIO(0x9888), 0x07844880 },
+	{ _MMIO(0x9888), 0x09840992 },
+	{ _MMIO(0x9888), 0x0b840a94 },
+	{ _MMIO(0x9888), 0x0d840b96 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2d800147 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801082 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x143d0160 },
+	{ _MMIO(0x9888), 0x163d2800 },
+	{ _MMIO(0x9888), 0x183d0120 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x003d0011 },
+	{ _MMIO(0x9888), 0x063d0900 },
+	{ _MMIO(0x9888), 0x083d0a13 },
+	{ _MMIO(0x9888), 0x0a3d0b15 },
+	{ _MMIO(0x9888), 0x0c3d2317 },
+	{ _MMIO(0x9888), 0x043d21b7 },
+	{ _MMIO(0x9888), 0x103d0000 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e5825c1 },
+	{ _MMIO(0x9888), 0x00586100 },
+	{ _MMIO(0x9888), 0x0258204c },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_2_subslices_0x02[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x145b0160 },
+	{ _MMIO(0x9888), 0x165b2800 },
+	{ _MMIO(0x9888), 0x185b0120 },
+	{ _MMIO(0x9888), 0x0e5c25c1 },
+	{ _MMIO(0x9888), 0x005c6100 },
+	{ _MMIO(0x9888), 0x025c204c },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x005b0011 },
+	{ _MMIO(0x9888), 0x065b0900 },
+	{ _MMIO(0x9888), 0x085b0a13 },
+	{ _MMIO(0x9888), 0x0a5b0b15 },
+	{ _MMIO(0x9888), 0x0c5b2317 },
+	{ _MMIO(0x9888), 0x045b21b7 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x0e5b0000 },
+	{ _MMIO(0x9888), 0x1a5b0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_4_subslices_0x04[] = {
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x143a0160 },
+	{ _MMIO(0x9888), 0x163a2800 },
+	{ _MMIO(0x9888), 0x183a0120 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e38a5c1 },
+	{ _MMIO(0x9888), 0x0038a100 },
+	{ _MMIO(0x9888), 0x0238204c },
+	{ _MMIO(0x9888), 0x16388000 },
+	{ _MMIO(0x9888), 0x183802aa },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06380000 },
+	{ _MMIO(0x9888), 0x08388000 },
+	{ _MMIO(0x9888), 0x0a388000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x003a0011 },
+	{ _MMIO(0x9888), 0x063a0900 },
+	{ _MMIO(0x9888), 0x083a0a13 },
+	{ _MMIO(0x9888), 0x0a3a0b15 },
+	{ _MMIO(0x9888), 0x0c3a2317 },
+	{ _MMIO(0x9888), 0x043a21b7 },
+	{ _MMIO(0x9888), 0x103a0000 },
+	{ _MMIO(0x9888), 0x0e3a0000 },
+	{ _MMIO(0x9888), 0x1a3a0000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_1_subslices_0x08[] = {
+	{ _MMIO(0x9888), 0x14bd0160 },
+	{ _MMIO(0x9888), 0x16bd2800 },
+	{ _MMIO(0x9888), 0x18bd0120 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x00dcc000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00bd0011 },
+	{ _MMIO(0x9888), 0x06bd0900 },
+	{ _MMIO(0x9888), 0x08bd0a13 },
+	{ _MMIO(0x9888), 0x0abd0b15 },
+	{ _MMIO(0x9888), 0x0cbd2317 },
+	{ _MMIO(0x9888), 0x04bd21b7 },
+	{ _MMIO(0x9888), 0x10bd0000 },
+	{ _MMIO(0x9888), 0x0ebd0000 },
+	{ _MMIO(0x9888), 0x1abd0000 },
+	{ _MMIO(0x9888), 0x0ed825c1 },
+	{ _MMIO(0x9888), 0x00d86100 },
+	{ _MMIO(0x9888), 0x02d8204c },
+	{ _MMIO(0x9888), 0x06d88000 },
+	{ _MMIO(0x9888), 0x08d8c000 },
+	{ _MMIO(0x9888), 0x0ad8c000 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x04d8c000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb4000 },
+	{ _MMIO(0x9888), 0x18db5400 },
+	{ _MMIO(0x9888), 0x1adb0155 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db4000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_3_subslices_0x10[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x14db0160 },
+	{ _MMIO(0x9888), 0x16db2800 },
+	{ _MMIO(0x9888), 0x18db0120 },
+	{ _MMIO(0x9888), 0x0edc25c1 },
+	{ _MMIO(0x9888), 0x00dc6100 },
+	{ _MMIO(0x9888), 0x02dc204c },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00db0011 },
+	{ _MMIO(0x9888), 0x06db0900 },
+	{ _MMIO(0x9888), 0x08db0a13 },
+	{ _MMIO(0x9888), 0x0adb0b15 },
+	{ _MMIO(0x9888), 0x0cdb2317 },
+	{ _MMIO(0x9888), 0x04db21b7 },
+	{ _MMIO(0x9888), 0x10db0000 },
+	{ _MMIO(0x9888), 0x0edb0000 },
+	{ _MMIO(0x9888), 0x1adb0000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_5_subslices_0x20[] = {
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x14ba0160 },
+	{ _MMIO(0x9888), 0x16ba2800 },
+	{ _MMIO(0x9888), 0x18ba0120 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb8a5c1 },
+	{ _MMIO(0x9888), 0x00b8a100 },
+	{ _MMIO(0x9888), 0x02b8204c },
+	{ _MMIO(0x9888), 0x16b88000 },
+	{ _MMIO(0x9888), 0x18b802aa },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x06b80000 },
+	{ _MMIO(0x9888), 0x08b88000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x00ba0011 },
+	{ _MMIO(0x9888), 0x06ba0900 },
+	{ _MMIO(0x9888), 0x08ba0a13 },
+	{ _MMIO(0x9888), 0x0aba0b15 },
+	{ _MMIO(0x9888), 0x0cba2317 },
+	{ _MMIO(0x9888), 0x04ba21b7 },
+	{ _MMIO(0x9888), 0x10ba0000 },
+	{ _MMIO(0x9888), 0x0eba0000 },
+	{ _MMIO(0x9888), 0x1aba0000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
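+/*
+ * The compute_extended MUX programming is per-subslice: there is one MUX
+ * register list for each of the six possible subslices, and the helper
+ * below only returns the lists whose subslice is present in
+ * sseu.subslice_mask, so fused-off subslices contribute no MUX writes.
+ *
+ * Illustrative (hypothetical) caller, assuming the dev_priv->perf.oa
+ * scratch arrays checked by the BUILD_BUG_ON()s below:
+ *
+ *	n = get_compute_extended_mux_config(dev_priv,
+ *					    dev_priv->perf.oa.mux_regs,
+ *					    dev_priv->perf.oa.mux_regs_lens);
+ *	for (i = 0; i < n; i++)
+ *		program the lens[i] (register, value) pairs in regs[i]
+ */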
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 6);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 6);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_compute_extended_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x08) {
+		regs[n] = mux_config_compute_extended_1_subslices_0x08;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_1_subslices_0x08);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x02) {
+		regs[n] = mux_config_compute_extended_2_subslices_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_2_subslices_0x02);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x10) {
+		regs[n] = mux_config_compute_extended_3_subslices_0x10;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_3_subslices_0x10);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x04) {
+		regs[n] = mux_config_compute_extended_4_subslices_0x04;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_4_subslices_0x04);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x20) {
+		regs[n] = mux_config_compute_extended_5_subslices_0x20;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_5_subslices_0x20);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x143f00b3 },
+	{ _MMIO(0x9888), 0x14bf00b3 },
+	{ _MMIO(0x9888), 0x138303c0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x003f0029 },
+	{ _MMIO(0x9888), 0x063f1400 },
+	{ _MMIO(0x9888), 0x083f1225 },
+	{ _MMIO(0x9888), 0x0e3f1327 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x005a4000 },
+	{ _MMIO(0x9888), 0x065a8000 },
+	{ _MMIO(0x9888), 0x085ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x001d4000 },
+	{ _MMIO(0x9888), 0x061d8000 },
+	{ _MMIO(0x9888), 0x081dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1f2a00 },
+	{ _MMIO(0x9888), 0x101f0280 },
+	{ _MMIO(0x9888), 0x00391000 },
+	{ _MMIO(0x9888), 0x06394000 },
+	{ _MMIO(0x9888), 0x08395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x0abf1429 },
+	{ _MMIO(0x9888), 0x0cbf1225 },
+	{ _MMIO(0x9888), 0x00bf1380 },
+	{ _MMIO(0x9888), 0x02bf0026 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0adac000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02da4000 },
+	{ _MMIO(0x9888), 0x0a9dc000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029d4000 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f002a },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0ab95000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b91000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f880003 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a8020 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x238b0520 },
+	{ _MMIO(0x9888), 0x258ba950 },
+	{ _MMIO(0x9888), 0x278b0016 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0001 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x03835180 },
+	{ _MMIO(0x9888), 0x05834022 },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800840 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800800 },
+	{ _MMIO(0x9888), 0x418014a2 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fff2 },
+	{ _MMIO(0x2774), 0x00007ff0 },
+	{ _MMIO(0x2778), 0x0007ffe2 },
+	{ _MMIO(0x277c), 0x00007ff0 },
+	{ _MMIO(0x2780), 0x0007ffc2 },
+	{ _MMIO(0x2784), 0x00007ff0 },
+	{ _MMIO(0x2788), 0x0007ff82 },
+	{ _MMIO(0x278c), 0x00007ff0 },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000bfef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000bfdf },
+	{ _MMIO(0x27a0), 0x0007fffa },
+	{ _MMIO(0x27a4), 0x0000bfbf },
+	{ _MMIO(0x27a8), 0x0007fffa },
+	{ _MMIO(0x27ac), 0x0000bf7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_reads_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x163d240b },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x185b5520 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d00b0 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d10a0 },
+	{ _MMIO(0x9888), 0x0c3d11a2 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x045b6300 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5555 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
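+/*
+ * The data_port_reads_coalescing MUX config is only defined for subslice 0,
+ * so the helper below returns 0 lists (nothing to program) on parts where
+ * that subslice is fused off.
+ */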
+static int
+get_data_port_reads_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					  const struct i915_oa_reg **regs,
+					  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_data_port_reads_coalescing_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_data_port_reads_coalescing_0_subslices_0x01);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ff72 },
+	{ _MMIO(0x2774), 0x0000bfd0 },
+	{ _MMIO(0x2778), 0x0007ff62 },
+	{ _MMIO(0x277c), 0x0000bfd0 },
+	{ _MMIO(0x2780), 0x0007ff42 },
+	{ _MMIO(0x2784), 0x0000bfd0 },
+	{ _MMIO(0x2788), 0x0007ff02 },
+	{ _MMIO(0x278c), 0x0000bfd0 },
+	{ _MMIO(0x2790), 0x0005fff2 },
+	{ _MMIO(0x2794), 0x0000bfd0 },
+	{ _MMIO(0x2798), 0x0005ffe2 },
+	{ _MMIO(0x279c), 0x0000bfd0 },
+	{ _MMIO(0x27a0), 0x0005ffc2 },
+	{ _MMIO(0x27a4), 0x0000bfd0 },
+	{ _MMIO(0x27a8), 0x0005ff82 },
+	{ _MMIO(0x27ac), 0x0000bfd0 },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_writes_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x143d0120 },
+	{ _MMIO(0x9888), 0x163d2400 },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d0094 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d1814 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0c3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x045b6a80 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0141 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f0282 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381415 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a82a0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b1555 },
+	{ _MMIO(0x9888), 0x278b0014 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x21852aaa },
+	{ _MMIO(0x9888), 0x23850028 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830141 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
+static int
+get_data_port_writes_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					   const struct i915_oa_reg **regs,
+					   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_data_port_writes_coalescing_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_data_port_writes_coalescing_0_subslices_0x01);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static int
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_4;
+	lens[n] = ARRAY_SIZE(mux_config_l3_4);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_1;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_2;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x161503e0 },
+	{ _MMIO(0x9888), 0x163503e0 },
+	{ _MMIO(0x9888), 0x165503e0 },
+	{ _MMIO(0x9888), 0x169503e0 },
+	{ _MMIO(0x9888), 0x16b503e0 },
+	{ _MMIO(0x9888), 0x16d503e0 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x04584000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b8000 },
+	{ _MMIO(0x9888), 0x0e1f00a8 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x06141000 },
+	{ _MMIO(0x9888), 0x041500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x0a338000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x0435c300 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x0c538000 },
+	{ _MMIO(0x9888), 0x06544000 },
+	{ _MMIO(0x9888), 0x065500c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dc4000 },
+	{ _MMIO(0x9888), 0x02bd8000 },
+	{ _MMIO(0x9888), 0x00d88000 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f0002 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x06ba8000 },
+	{ _MMIO(0x9888), 0x02938000 },
+	{ _MMIO(0x9888), 0x04942000 },
+	{ _MMIO(0x9888), 0x0095c300 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x04b44000 },
+	{ _MMIO(0x9888), 0x02b500c3 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d38000 },
+	{ _MMIO(0x9888), 0x04d48000 },
+	{ _MMIO(0x9888), 0x02d5c300 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b3500 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c40 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41801482 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x14100812 },
+	{ _MMIO(0x9888), 0x14125800 },
+	{ _MMIO(0x9888), 0x161200c0 },
+	{ _MMIO(0x9888), 0x14300812 },
+	{ _MMIO(0x9888), 0x14325800 },
+	{ _MMIO(0x9888), 0x163200c0 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183d2800 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b9400 },
+	{ _MMIO(0x9888), 0x1a5b002a },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f002a },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380155 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x00100047 },
+	{ _MMIO(0x9888), 0x06101a80 },
+	{ _MMIO(0x9888), 0x10100000 },
+	{ _MMIO(0x9888), 0x0810c000 },
+	{ _MMIO(0x9888), 0x0811c000 },
+	{ _MMIO(0x9888), 0x08126151 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x0e134000 },
+	{ _MMIO(0x9888), 0x161300a0 },
+	{ _MMIO(0x9888), 0x0a301ac7 },
+	{ _MMIO(0x9888), 0x10300000 },
+	{ _MMIO(0x9888), 0x0c30c000 },
+	{ _MMIO(0x9888), 0x0c31c000 },
+	{ _MMIO(0x9888), 0x0c326151 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x16332a00 },
+	{ _MMIO(0x9888), 0x18330001 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a2aa0 },
+	{ _MMIO(0x9888), 0x238b0020 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0001 },
+	{ _MMIO(0x9888), 0x1f850080 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830015 },
+	{ _MMIO(0x9888), 0x01844000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800002 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800884 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800002 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x198b0000 },
+	{ _MMIO(0x9888), 0x078b0066 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x21850008 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_READS_COALESCING:
+		dev_priv->perf.oa.n_mux_configs =
+			get_data_port_reads_coalescing_mux_config(dev_priv,
+								  dev_priv->perf.oa.mux_regs,
+								  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_READS_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_reads_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_reads_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_WRITES_COALESCING:
+		dev_priv->perf.oa.n_mux_configs =
+			get_data_port_writes_coalescing_mux_config(dev_priv,
+								   dev_priv->perf.oa.mux_regs,
+								   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_WRITES_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_writes_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_writes_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_4_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_1_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_2_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "35fbc9b2-a891-40a6-a38d-022bb7057552",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "233d0544-fff7-4281-8291-e02f222aff72",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "2b255d48-2117-4fef-a8f7-f151e1d25a2c",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "f7fd3220-b466-4a4d-9f98-b0caf3f2394c",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "e99ccaca-821c-4df9-97a7-96bdb7204e43",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27a364dc-8225-4ecb-b607-d6f1925598d9",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_data_port_reads_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_READS_COALESCING);
+}
+
+static struct device_attribute dev_attr_data_port_reads_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_reads_coalescing_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_data_port_reads_coalescing[] = {
+	&dev_attr_data_port_reads_coalescing_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_data_port_reads_coalescing = {
+	.name = "857fc630-2f09-4804-85f1-084adfadd5ab",
+	.attrs =  attrs_data_port_reads_coalescing,
+};
+
+static ssize_t
+show_data_port_writes_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_WRITES_COALESCING);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_data_port_writes_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_writes_coalescing_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_data_port_writes_coalescing[] = {
+	&dev_attr_data_port_writes_coalescing_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_data_port_writes_coalescing = {
+	.name = "343ebc99-4a55-414c-8c17-d8e259cf5e20",
+	.attrs =  attrs_data_port_writes_coalescing,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "7bdafd88-a4fa-4ed5-bc09-1a977aa5be3e",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
 }
 
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "9385ebb2-f34f-4aa5-aec5-7e9cbbea0f0b",
+	.attrs =  attrs_l3_1,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_l3_2_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_l3_2_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_l3_2 = {
+	.name = "446ae59b-ff2e-41c9-b49e-0184a54bf00a",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "84a7956f-1ea4-4d0d-837f-e39a0376e38c",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "92b493d9-df18-4bed-be06-5cac6f2a6f5f",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "14345c35-cc46-40d0-bb04-6ed1fbb43679",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "f0c6ba37-d3d3-4211-91b5-226730312a54",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "30bf3702-48cf-4bca-b412-7cf50bb2f564",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "238bec85-df05-44f3-b905-d166712f2451",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "24bf02cd-8693-4583-981c-c4165b33da01",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "8fb61ba2-2fbb-454c-a136-2dec5a8a595e",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "e1743ca0-7fc8-410b-a066-de7bbb9280b7",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "d6de6f55-e526-4f79-a6a6-d7315c09044e",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -374,9 +5148,177 @@ i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+		if (ret)
+			goto error_data_port_reads_coalescing;
+	}
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+		if (ret)
+			goto error_data_port_writes_coalescing;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+error_data_port_writes_coalescing:
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+error_data_port_reads_coalescing:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -389,4 +5331,46 @@ i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
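
(For reference only, not part of the patch: a minimal userspace sketch of how the sysfs IDs registered above are meant to be consumed. It assumes the /sys/class/drm/card0/metrics/<uuid>/id layout this file creates plus the existing DRM_IOCTL_I915_PERF_OPEN uapi; the card index, render node path and OA exponent below are illustrative values, and opening a stream typically requires CAP_SYS_ADMIN.)

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <drm/i915_drm.h>	/* kernel uapi header for the perf ioctl */

static int read_metric_set_id(const char *guid)
{
	char path[128], buf[16];
	ssize_t n;
	int fd, id = -1;

	/* "card0" is an assumption; a real tool would match its DRM device */
	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/metrics/%s/id", guid);

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		id = atoi(buf);
	}
	close(fd);

	return id;
}

int main(void)
{
	/* GUID of the BDW TestOa config registered in this file */
	int set = read_metric_set_id("d6de6f55-e526-4f79-a6a6-d7315c09044e");
	int drm_fd = open("/dev/dri/renderD128", O_RDWR); /* assumed node */
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, (uint64_t)set,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16, /* periodic sampling exponent */
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};
	int stream_fd;

	if (set < 0 || drm_fd < 0)
		return 1;

	stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
	if (stream_fd < 0) {
		perror("DRM_IOCTL_I915_PERF_OPEN");
		return 1;
	}

	/* periodic OA reports can now be read() from stream_fd */
	close(stream_fd);
	close(drm_fd);
	return 0;
}

The directory name is the config's stable GUID, so tools such as gputop or Mesa can look a config up by GUID and read back the numeric, per-gen metric set id that the driver expects in DRM_I915_PERF_PROP_OA_METRICS_SET; the switch statement above then maps that id onto the b_counter/flex/mux register lists for the config.
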
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
index 345ec1d3faa7..93864d8f32dd 100644
--- a/drivers/gpu/drm/i915/i915_oa_bxt.c
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -33,9 +33,23 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_bxt = 1;
+int i915_oa_n_builtin_metric_sets_bxt = 15;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -156,6 +170,1622 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d4000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x0c2e1400 },
+	{ _MMIO(0x9888), 0x0e2e5100 },
+	{ _MMIO(0x9888), 0x102e0114 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004f6b42 },
+	{ _MMIO(0x9888), 0x064f6200 },
+	{ _MMIO(0x9888), 0x084f4100 },
+	{ _MMIO(0x9888), 0x0a4f0061 },
+	{ _MMIO(0x9888), 0x0c4f6c4c },
+	{ _MMIO(0x9888), 0x0e4f4b00 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0f8800 },
+	{ _MMIO(0x9888), 0x1c0f08a2 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c1451 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c0010 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x19938a28 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x19900177 },
+	{ _MMIO(0x9888), 0x1b900178 },
+	{ _MMIO(0x9888), 0x1d900125 },
+	{ _MMIO(0x9888), 0x1f900123 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c2e001f },
+	{ _MMIO(0x9888), 0x0a2f0000 },
+	{ _MMIO(0x9888), 0x10186800 },
+	{ _MMIO(0x9888), 0x11810019 },
+	{ _MMIO(0x9888), 0x15810013 },
+	{ _MMIO(0x9888), 0x13820020 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x17840000 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x21860000 },
+	{ _MMIO(0x9888), 0x178703e0 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x022e5400 },
+	{ _MMIO(0x9888), 0x002e0000 },
+	{ _MMIO(0x9888), 0x0e2e0080 },
+	{ _MMIO(0x9888), 0x082f0040 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x06174000 },
+	{ _MMIO(0x9888), 0x06180012 },
+	{ _MMIO(0x9888), 0x00180000 },
+	{ _MMIO(0x9888), 0x0d804000 },
+	{ _MMIO(0x9888), 0x0f804000 },
+	{ _MMIO(0x9888), 0x05804000 },
+	{ _MMIO(0x9888), 0x09810200 },
+	{ _MMIO(0x9888), 0x0b810030 },
+	{ _MMIO(0x9888), 0x03810003 },
+	{ _MMIO(0x9888), 0x21819140 },
+	{ _MMIO(0x9888), 0x23819050 },
+	{ _MMIO(0x9888), 0x25810018 },
+	{ _MMIO(0x9888), 0x0b820980 },
+	{ _MMIO(0x9888), 0x03820d80 },
+	{ _MMIO(0x9888), 0x11820000 },
+	{ _MMIO(0x9888), 0x0182c000 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x09824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0d830004 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x0f831000 },
+	{ _MMIO(0x9888), 0x01848072 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x09844000 },
+	{ _MMIO(0x9888), 0x0f848000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x09860092 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x01869100 },
+	{ _MMIO(0x9888), 0x0f870065 },
+	{ _MMIO(0x9888), 0x01870000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b952000 },
+	{ _MMIO(0x9888), 0x1d955055 },
+	{ _MMIO(0x9888), 0x1f951455 },
+	{ _MMIO(0x9888), 0x0992a000 },
+	{ _MMIO(0x9888), 0x0f928000 },
+	{ _MMIO(0x9888), 0x1192a800 },
+	{ _MMIO(0x9888), 0x1392028a },
+	{ _MMIO(0x9888), 0x0b92a000 },
+	{ _MMIO(0x9888), 0x0d922000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c01 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900863 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900c22 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x41900003 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900170 },
+	{ _MMIO(0x9888), 0x21900171 },
+	{ _MMIO(0x9888), 0x23900172 },
+	{ _MMIO(0x9888), 0x25900173 },
+	{ _MMIO(0x9888), 0x27900174 },
+	{ _MMIO(0x9888), 0x29900175 },
+	{ _MMIO(0x9888), 0x2b900176 },
+	{ _MMIO(0x9888), 0x2d900177 },
+	{ _MMIO(0x9888), 0x2f90017f },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900000 },
+	{ _MMIO(0x9888), 0x41900080 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900180 },
+	{ _MMIO(0x9888), 0x21900181 },
+	{ _MMIO(0x9888), 0x23900182 },
+	{ _MMIO(0x9888), 0x25900183 },
+	{ _MMIO(0x9888), 0x27900184 },
+	{ _MMIO(0x9888), 0x29900185 },
+	{ _MMIO(0x9888), 0x2b900186 },
+	{ _MMIO(0x9888), 0x2d900187 },
+	{ _MMIO(0x9888), 0x2f900170 },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x141c0160 },
+	{ _MMIO(0x9888), 0x161c0015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d5000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x0c2e5400 },
+	{ _MMIO(0x9888), 0x0e2e5515 },
+	{ _MMIO(0x9888), 0x102e0155 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x0e4cc000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0a4ea000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x0e4f4b41 },
+	{ _MMIO(0x9888), 0x004f4200 },
+	{ _MMIO(0x9888), 0x024f404c },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0a1bc000 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x001c0031 },
+	{ _MMIO(0x9888), 0x061c1900 },
+	{ _MMIO(0x9888), 0x081c1a33 },
+	{ _MMIO(0x9888), 0x0a1c1b35 },
+	{ _MMIO(0x9888), 0x0c1c3337 },
+	{ _MMIO(0x9888), 0x041c31c7 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0fa8aa },
+	{ _MMIO(0x9888), 0x1c0f0aaa },
+	{ _MMIO(0x9888), 0x182c8000 },
+	{ _MMIO(0x9888), 0x1c2c6aaa },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c2950 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993aaaa },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900400 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900001 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c03b0 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x0c2e0400 },
+	{ _MMIO(0x9888), 0x0e2e1500 },
+	{ _MMIO(0x9888), 0x102e0140 },
+	{ _MMIO(0x9888), 0x044c4000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004e2000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x1a4f4001 },
+	{ _MMIO(0x9888), 0x1c4f5005 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x180f1000 },
+	{ _MMIO(0x9888), 0x1a0fa800 },
+	{ _MMIO(0x9888), 0x1c0f0a00 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c4015 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x03931980 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993a00a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900178 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x022d4000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x064c8000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x024f6100 },
+	{ _MMIO(0x9888), 0x044f416b },
+	{ _MMIO(0x9888), 0x064f004b },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1a0f02a8 },
+	{ _MMIO(0x9888), 0x1a2c5500 },
+	{ _MMIO(0x9888), 0x0f808000 },
+	{ _MMIO(0x9888), 0x25810020 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1f951000 },
+	{ _MMIO(0x9888), 0x13920200 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4d900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x12643400 },
+	{ _MMIO(0x9888), 0x12653400 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x0a640024 },
+	{ _MMIO(0x9888), 0x10640000 },
+	{ _MMIO(0x9888), 0x04640000 },
+	{ _MMIO(0x9888), 0x0c650024 },
+	{ _MMIO(0x9888), 0x10650000 },
+	{ _MMIO(0x9888), 0x06650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_lt_0x03[] = {
+	{ _MMIO(0x9888), 0x14640340 },
+	{ _MMIO(0x9888), 0x14650340 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x04642400 },
+	{ _MMIO(0x9888), 0x22640000 },
+	{ _MMIO(0x9888), 0x1a640000 },
+	{ _MMIO(0x9888), 0x06650024 },
+	{ _MMIO(0x9888), 0x22650000 },
+	{ _MMIO(0x9888), 0x1c650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		regs[n] = mux_config_l3_1_0_sku_gte_0x03;
+		lens[n] = ARRAY_SIZE(mux_config_l3_1_0_sku_gte_0x03);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision < 0x03) {
+		regs[n] = mux_config_l3_1_0_sku_lt_0x03;
+		lens[n] = ARRAY_SIZE(mux_config_l3_1_0_sku_lt_0x03);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102d7800 },
+	{ _MMIO(0x9888), 0x122d79e0 },
+	{ _MMIO(0x9888), 0x0c2f0004 },
+	{ _MMIO(0x9888), 0x100e3800 },
+	{ _MMIO(0x9888), 0x180f0005 },
+	{ _MMIO(0x9888), 0x002d0940 },
+	{ _MMIO(0x9888), 0x022d802f },
+	{ _MMIO(0x9888), 0x042d4013 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0050 },
+	{ _MMIO(0x9888), 0x022f0010 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x040e0480 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x060f0027 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x1a0f0040 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x439014a0 },
+	{ _MMIO(0x9888), 0x459000a4 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x121300a0 },
+	{ _MMIO(0x9888), 0x141600ab },
+	{ _MMIO(0x9888), 0x123300a0 },
+	{ _MMIO(0x9888), 0x143600ab },
+	{ _MMIO(0x9888), 0x125300a0 },
+	{ _MMIO(0x9888), 0x145600ab },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e01a0 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0065 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0800 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f023f },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2cc030 },
+	{ _MMIO(0x9888), 0x04132180 },
+	{ _MMIO(0x9888), 0x02130000 },
+	{ _MMIO(0x9888), 0x0c148000 },
+	{ _MMIO(0x9888), 0x0e142000 },
+	{ _MMIO(0x9888), 0x04148000 },
+	{ _MMIO(0x9888), 0x1e150140 },
+	{ _MMIO(0x9888), 0x1c150040 },
+	{ _MMIO(0x9888), 0x0c163000 },
+	{ _MMIO(0x9888), 0x0e160068 },
+	{ _MMIO(0x9888), 0x10160000 },
+	{ _MMIO(0x9888), 0x18160000 },
+	{ _MMIO(0x9888), 0x0a164000 },
+	{ _MMIO(0x9888), 0x04330043 },
+	{ _MMIO(0x9888), 0x02330000 },
+	{ _MMIO(0x9888), 0x0234a000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x1c350015 },
+	{ _MMIO(0x9888), 0x02363460 },
+	{ _MMIO(0x9888), 0x10360000 },
+	{ _MMIO(0x9888), 0x04360000 },
+	{ _MMIO(0x9888), 0x06360000 },
+	{ _MMIO(0x9888), 0x08364000 },
+	{ _MMIO(0x9888), 0x06530043 },
+	{ _MMIO(0x9888), 0x02530000 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x06542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550100 },
+	{ _MMIO(0x9888), 0x0e563000 },
+	{ _MMIO(0x9888), 0x00563400 },
+	{ _MMIO(0x9888), 0x10560000 },
+	{ _MMIO(0x9888), 0x18560000 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x0c564000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9014a0 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900820 },
+	{ _MMIO(0x9888), 0x45901022 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x141a0000 },
+	{ _MMIO(0x9888), 0x143a0000 },
+	{ _MMIO(0x9888), 0x145a0000 },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0150 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e006a },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0bc0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f0302 },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2c00f0 },
+	{ _MMIO(0x9888), 0x021a3080 },
+	{ _MMIO(0x9888), 0x041a31e5 },
+	{ _MMIO(0x9888), 0x02148000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150054 },
+	{ _MMIO(0x9888), 0x06168000 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x0c3a3280 },
+	{ _MMIO(0x9888), 0x0e3a0063 },
+	{ _MMIO(0x9888), 0x063a0061 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x0c348000 },
+	{ _MMIO(0x9888), 0x0e342000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1e350140 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x18360028 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x0e5a3080 },
+	{ _MMIO(0x9888), 0x005a3280 },
+	{ _MMIO(0x9888), 0x025a0063 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x02542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550001 },
+	{ _MMIO(0x9888), 0x18560080 },
+	{ _MMIO(0x9888), 0x02568000 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x45901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x141a026b },
+	{ _MMIO(0x9888), 0x143a0173 },
+	{ _MMIO(0x9888), 0x145a026b },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0069 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x180f6000 },
+	{ _MMIO(0x9888), 0x1a0f030a },
+	{ _MMIO(0x9888), 0x1a2c03c0 },
+	{ _MMIO(0x9888), 0x041a37e7 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150050 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x003a3380 },
+	{ _MMIO(0x9888), 0x063a006f },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x00348000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1a352000 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x02368000 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x025a37e7 },
+	{ _MMIO(0x9888), 0x0254a000 },
+	{ _MMIO(0x9888), 0x1c550005 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x06568000 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900020 },
+	{ _MMIO(0x9888), 0x45901080 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x141a001f },
+	{ _MMIO(0x9888), 0x143a001f },
+	{ _MMIO(0x9888), 0x145a001f },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0094 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x1a0f00e0 },
+	{ _MMIO(0x9888), 0x1a2c0c00 },
+	{ _MMIO(0x9888), 0x061a0063 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x06142000 },
+	{ _MMIO(0x9888), 0x1c150100 },
+	{ _MMIO(0x9888), 0x0c168000 },
+	{ _MMIO(0x9888), 0x043a3180 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x04348000 },
+	{ _MMIO(0x9888), 0x1c350040 },
+	{ _MMIO(0x9888), 0x0a368000 },
+	{ _MMIO(0x9888), 0x045a0063 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x1c550010 },
+	{ _MMIO(0x9888), 0x08568000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900004 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x19800000 },
+	{ _MMIO(0x9888), 0x07800063 },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x23810008 },
+	{ _MMIO(0x9888), 0x1d950400 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.n_mux_configs = 0;
@@ -164,14 +1794,352 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
 		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
 		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -181,14 +2149,40 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 		}
 
 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_compute_extra;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_compute_extra);
 
 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_compute_extra;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
 
 		return 0;
 	default:
@@ -218,6 +2212,314 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };
 
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "012d72cf-82a9-4d25-8ddf-74076fd30797",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "ce416533-e49e-4211-80af-ec513590a914",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "398e2452-18d7-42d0-b241-e4d0a9148ada",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "d324a0d6-7269-4847-a5c2-6f71ddc7fed5",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "caf3596a-7bb1-4dec-b3b3-2a080d283b49",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "49b956e2-d5b9-47e0-9d8a-cee5e8cec527",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "f64ef50a-bdba-4b35-8f09-203c13d8ee5a",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "00ad5a41-7eab-4f7a-9103-49d411c67219",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "46dc44ca-491c-4cc1-a951-e7b3e62bf02b",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "8364e2a8-af63-40af-b0d5-42969a255654",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "175c8092-cb25-4d1e-8dc7-b4fdd39e2d92",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "d260f03f-b34d-4b49-a44e-436819117332",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "fa6ecf21-2cb8-4d0b-9308-6e4a7b4ca87a",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "5ee72f5c-092f-421e-8b70-225f7c3e9612",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 {
@@ -230,9 +2532,121 @@ i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -245,4 +2659,32 @@ i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
index b15f6c980d11..aa6bece7e75f 100644
--- a/drivers/gpu/drm/i915/i915_oa_chv.c
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -33,9 +33,22 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_chv = 1;
+int i915_oa_n_builtin_metric_sets_chv = 14;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2740), 0x00000000 },
@@ -146,6 +159,1874 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x2e5800e0 },
+	{ _MMIO(0x9888), 0x2e3800e0 },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x3d800140 },
+	{ _MMIO(0x9888), 0x08580042 },
+	{ _MMIO(0x9888), 0x0c580040 },
+	{ _MMIO(0x9888), 0x1058004c },
+	{ _MMIO(0x9888), 0x1458004b },
+	{ _MMIO(0x9888), 0x04580000 },
+	{ _MMIO(0x9888), 0x00580000 },
+	{ _MMIO(0x9888), 0x00195555 },
+	{ _MMIO(0x9888), 0x06380042 },
+	{ _MMIO(0x9888), 0x0a380040 },
+	{ _MMIO(0x9888), 0x0e38004c },
+	{ _MMIO(0x9888), 0x1238004b },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x00384444 },
+	{ _MMIO(0x9888), 0x003a5555 },
+	{ _MMIO(0x9888), 0x018bffff },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x17800074 },
+	{ _MMIO(0x9888), 0x1980007d },
+	{ _MMIO(0x9888), 0x1b80007c },
+	{ _MMIO(0x9888), 0x1d8000b6 },
+	{ _MMIO(0x9888), 0x1f8000b7 },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x03800000 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x4980012a },
+	{ _MMIO(0x9888), 0x4b80012a },
+	{ _MMIO(0x9888), 0x4d80012a },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x518001ce },
+	{ _MMIO(0x9888), 0x538001ce },
+	{ _MMIO(0x9888), 0x5580000e },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x261e0000 },
+	{ _MMIO(0x9888), 0x281f000f },
+	{ _MMIO(0x9888), 0x2817001a },
+	{ _MMIO(0x9888), 0x2791001f },
+	{ _MMIO(0x9888), 0x27880019 },
+	{ _MMIO(0x9888), 0x2d890000 },
+	{ _MMIO(0x9888), 0x278a0007 },
+	{ _MMIO(0x9888), 0x298d001f },
+	{ _MMIO(0x9888), 0x278e0020 },
+	{ _MMIO(0x9888), 0x2b8f0012 },
+	{ _MMIO(0x9888), 0x29900000 },
+	{ _MMIO(0x9888), 0x00184000 },
+	{ _MMIO(0x9888), 0x02181000 },
+	{ _MMIO(0x9888), 0x02194000 },
+	{ _MMIO(0x9888), 0x141e0002 },
+	{ _MMIO(0x9888), 0x041e0000 },
+	{ _MMIO(0x9888), 0x001e0000 },
+	{ _MMIO(0x9888), 0x221f0015 },
+	{ _MMIO(0x9888), 0x041f0000 },
+	{ _MMIO(0x9888), 0x001f4000 },
+	{ _MMIO(0x9888), 0x021f0000 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0213c000 },
+	{ _MMIO(0x9888), 0x02164000 },
+	{ _MMIO(0x9888), 0x24170012 },
+	{ _MMIO(0x9888), 0x04170000 },
+	{ _MMIO(0x9888), 0x07910005 },
+	{ _MMIO(0x9888), 0x05910000 },
+	{ _MMIO(0x9888), 0x01911500 },
+	{ _MMIO(0x9888), 0x03910501 },
+	{ _MMIO(0x9888), 0x0d880002 },
+	{ _MMIO(0x9888), 0x1d880003 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x0b890032 },
+	{ _MMIO(0x9888), 0x1b890031 },
+	{ _MMIO(0x9888), 0x05890000 },
+	{ _MMIO(0x9888), 0x01890040 },
+	{ _MMIO(0x9888), 0x03890040 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x198a0004 },
+	{ _MMIO(0x9888), 0x058a0000 },
+	{ _MMIO(0x9888), 0x018a8050 },
+	{ _MMIO(0x9888), 0x038a2050 },
+	{ _MMIO(0x9888), 0x018b95a9 },
+	{ _MMIO(0x9888), 0x038be5a9 },
+	{ _MMIO(0x9888), 0x018c1500 },
+	{ _MMIO(0x9888), 0x038c0501 },
+	{ _MMIO(0x9888), 0x178d0015 },
+	{ _MMIO(0x9888), 0x058d0000 },
+	{ _MMIO(0x9888), 0x138e0004 },
+	{ _MMIO(0x9888), 0x218e000c },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x018e0500 },
+	{ _MMIO(0x9888), 0x038e0101 },
+	{ _MMIO(0x9888), 0x0f8f0027 },
+	{ _MMIO(0x9888), 0x058f0000 },
+	{ _MMIO(0x9888), 0x018f0000 },
+	{ _MMIO(0x9888), 0x038f0001 },
+	{ _MMIO(0x9888), 0x11900013 },
+	{ _MMIO(0x9888), 0x1f900017 },
+	{ _MMIO(0x9888), 0x05900000 },
+	{ _MMIO(0x9888), 0x01900100 },
+	{ _MMIO(0x9888), 0x03900001 },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x03845555 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x458000aa },
+	{ _MMIO(0x9888), 0x478000aa },
+	{ _MMIO(0x9888), 0x4980018c },
+	{ _MMIO(0x9888), 0x4b80014b },
+	{ _MMIO(0x9888), 0x4d800128 },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x51800187 },
+	{ _MMIO(0x9888), 0x5380014b },
+	{ _MMIO(0x9888), 0x55800149 },
+	{ _MMIO(0x9888), 0x5780010a },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static int
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_4;
+	lens[n] = ARRAY_SIZE(mux_config_l3_4);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_1;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_2;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x338b0000 },
+	{ _MMIO(0x9888), 0x258b0066 },
+	{ _MMIO(0x9888), 0x058b0000 },
+	{ _MMIO(0x9888), 0x038b0000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x47800080 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x1823a4), 0x00000000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.n_mux_configs = 0;
@@ -154,14 +2035,300 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_4_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_1_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_2_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
 		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
 		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -171,14 +2338,66 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 		}
 
 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_tdl_1;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_tdl_1);
 
 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_tdl_1;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
 
 		return 0;
 	default:
@@ -208,6 +2427,292 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };
 
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "f522a89c-ecd1-4522-8331-3383c54af5f5",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "a9ccc03d-a943-4e6b-9cd6-13e063075927",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "2cf0c064-68df-4fac-9b3f-57f51ca8a069",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "78a87ff9-543a-49ce-95ea-26d86071ea93",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "9f2cece5-7bfe-4320-ad66-8c7cc526bec5",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "d890ef38-d309-47e4-b8b5-aa779bb19ab0",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "5fdff4a6-9dc8-45e1-bfda-ef54869fbdd4",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "2c0e45e1-7e2c-4a14-ae00-0b7ec868b8aa",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "71148d78-baf5-474f-878a-e23158d0265d",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "b996a2b7-c59c-492d-877a-8cd54fd6df84",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "eb2fecba-b431-42e7-8261-fe9429a6e67a",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "60749470-a648-4a4b-9f10-dbfe1e36e44d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "4a534b07-cba3-414d-8d60-874830e883aa",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 {
@@ -220,9 +2725,113 @@ i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -235,4 +2844,30 @@ i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c
index 8c13e0880e53..10f169f683b7 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.c
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.c
@@ -49,6 +49,9 @@ static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_render_basic[] = {
 	{ _MMIO(0x253a4), 0x01600000 },
 	{ _MMIO(0x25440), 0x00100000 },
@@ -148,6 +151,9 @@ static const struct i915_oa_reg b_counter_config_compute_basic[] = {
 	{ _MMIO(0x236c), 0x00000000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_basic[] = {
 	{ _MMIO(0x253a4), 0x00000000 },
 	{ _MMIO(0x2681c), 0x01f00800 },
@@ -223,6 +229,9 @@ static const struct i915_oa_reg b_counter_config_compute_extended[] = {
 	{ _MMIO(0x27ac), 0x0000fffe },
 };
 
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_extended[] = {
 	{ _MMIO(0x2681c), 0x3eb00800 },
 	{ _MMIO(0x26820), 0x00900000 },
@@ -289,6 +298,9 @@ static const struct i915_oa_reg b_counter_config_memory_reads[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };
 
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_reads[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x2d800000 },
@@ -358,6 +370,9 @@ static const struct i915_oa_reg b_counter_config_memory_writes[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };
 
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_writes[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x01500000 },
@@ -405,6 +420,9 @@ static const struct i915_oa_reg b_counter_config_sampler_balance[] = {
 	{ _MMIO(0x2724), 0x00800000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_sampler_balance[] = {
+};
+
 static const struct i915_oa_reg mux_config_sampler_balance[] = {
 	{ _MMIO(0x2eb9c), 0x01906400 },
 	{ _MMIO(0x2fb9c), 0x01906400 },
@@ -492,6 +510,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_render_basic);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_BASIC:
 		dev_priv->perf.oa.n_mux_configs =
@@ -513,6 +536,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_basic);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_EXTENDED:
 		dev_priv->perf.oa.n_mux_configs =
@@ -534,6 +562,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_extended);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_READS:
 		dev_priv->perf.oa.n_mux_configs =
@@ -555,6 +588,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_reads);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_WRITES:
 		dev_priv->perf.oa.n_mux_configs =
@@ -576,6 +614,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_writes);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
 		return 0;
 	case METRIC_SET_ID_SAMPLER_BALANCE:
 		dev_priv->perf.oa.n_mux_configs =
@@ -597,6 +640,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_sampler_balance);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_balance;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_balance);
+
 		return 0;
 	default:
 		return -ENODEV;
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
index 9ab9d21ec335..1268beda212c 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt2.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -33,9 +33,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt2 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -146,66 +163,3120 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8200 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x004f0db2 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f1880 },
+	{ _MMIO(0x9888), 0x0a4f0011 },
+	{ _MMIO(0x9888), 0x0c4f0e3c },
+	{ _MMIO(0x9888), 0x0e4f1d80 },
+	{ _MMIO(0x9888), 0x086c0002 },
+	{ _MMIO(0x9888), 0x0a6c0100 },
+	{ _MMIO(0x9888), 0x0e6c000c },
+	{ _MMIO(0x9888), 0x026c000b },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x081b4000 },
+	{ _MMIO(0x9888), 0x0a1b8000 },
+	{ _MMIO(0x9888), 0x0e1b4000 },
+	{ _MMIO(0x9888), 0x021b4000 },
+	{ _MMIO(0x9888), 0x1a1c4000 },
+	{ _MMIO(0x9888), 0x1c1c0012 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x005bc000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5bc000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x105c8000 },
+	{ _MMIO(0x9888), 0x1a5ca000 },
+	{ _MMIO(0x9888), 0x1c5c002d },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x0a4c0800 },
+	{ _MMIO(0x9888), 0x0c4c0082 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002cc000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cbe00 },
+	{ _MMIO(0x9888), 0x182c00ef },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900840 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900840 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900840 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900840 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1810 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0000 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		regs[n] = mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02);
+		n++;
+	}
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		regs[n] = mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x15968000 },
+	{ _MMIO(0x9888), 0x17968000 },
+	{ _MMIO(0x9888), 0x0f96c000 },
+	{ _MMIO(0x9888), 0x1f950011 },
+	{ _MMIO(0x9888), 0x1d950014 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x0b978000 },
+	{ _MMIO(0x9888), 0x0f974000 },
+	{ _MMIO(0x9888), 0x11974000 },
+	{ _MMIO(0x9888), 0x13978000 },
+	{ _MMIO(0x9888), 0x09974000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x05e5a000 },
+	{ _MMIO(0x9888), 0x01e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (dev_priv->drm.pdev->revision < 0x02) {
+		regs[n] = mux_config_render_pipe_profile_0_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_lt_0x02);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision >= 0x02) {
+		regs[n] = mux_config_render_pipe_profile_0_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_gte_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900157 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x15940016 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d9300aa },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x0f940018 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 3);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 3);
+
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		regs[n] = mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02);
+		n++;
+	}
+	if ((dev_priv->drm.pdev->revision < 0x05) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		regs[n] = mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision >= 0x05) {
+		regs[n] = mux_config_memory_reads_0_sku_gte_0x05;
+		lens[n] = ARRAY_SIZE(mux_config_memory_reads_0_sku_gte_0x05);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d93002a },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
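+/* As with MemoryReads above, MemoryWrites has three mutually exclusive
+ * per-SKU MUX variants.
+ */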
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 3);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 3);
+
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		regs[n] = mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02);
+		n++;
+	}
+	if ((dev_priv->drm.pdev->revision < 0x05) &&
+	    (dev_priv->drm.pdev->revision >= 0x02)) {
+		regs[n] = mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision >= 0x05) {
+		regs[n] = mux_config_memory_writes_0_sku_gte_0x05;
+		lens[n] = ARRAY_SIZE(mux_config_memory_writes_0_sku_gte_0x05);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
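+/* There is a single ComputeExtended MUX variant and it requires subslice 0
+ * to be available; if subslice 0 is fused off no config is returned and
+ * i915_oa_select_metric_set_sklgt2() will reject the set with -EINVAL.
+ */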
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_compute_extended_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900167 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900042 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53901111 },
+	{ _MMIO(0x9888), 0x43900420 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0e0f006c },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1190e000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c00 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
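+/* VmePipe: counters around the VME (video motion estimation) fixed
+ * function, going by the set's name.
+ */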
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x143a5800 },
+	{ _MMIO(0x9888), 0x163a00c0 },
+	{ _MMIO(0x9888), 0x12380240 },
+	{ _MMIO(0x9888), 0x14380002 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c1500 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f9500 },
+	{ _MMIO(0x9888), 0x100f002a },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x0a2dc000 },
+	{ _MMIO(0x9888), 0x0c2dc000 },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x06393000 },
+	{ _MMIO(0x9888), 0x0c3a28c1 },
+	{ _MMIO(0x9888), 0x003a0000 },
+	{ _MMIO(0x9888), 0x0a33f000 },
+	{ _MMIO(0x9888), 0x0c33f000 },
+	{ _MMIO(0x9888), 0x0a37a000 },
+	{ _MMIO(0x9888), 0x0c37a000 },
+	{ _MMIO(0x9888), 0x0a380977 },
+	{ _MMIO(0x9888), 0x08380000 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06383000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900800 },
+	{ _MMIO(0x9888), 0x47901000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900844 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
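+/* TestOa is a minimal configuration for exercising/validating the OA unit
+ * itself rather than profiling real workloads (it is the set the i-g-t perf
+ * tests are expected to use); note it programs no flex EU counters.
+ */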
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810016 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
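+/* Map the metrics_set requested by userspace onto the register programming
+ * for that set: one or more MUX config lists (chosen per SKU/fusing by the
+ * get_*_mux_config() helpers above), the B counter registers and the flex EU
+ * registers. Returns -EINVAL if no MUX config applies to this device; such a
+ * set wouldn't have been advertised via sysfs in the first place.
+ */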
 int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "fe47b29d-ae51-423e-bff4-27d965a95b60",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "e0ad5ae0-84ba-4f29-a723-1906c12cb774",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "9bc436dd-6130-4add-affc-283eb6eaa864",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "2ea0da8f-3527-4669-9d9d-13099a7435bf",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "d97d16af-028b-4cd1-a672-6210cb5513dd",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "9fb22842-e708-43f7-9752-e0e41670c39e",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "5378e2a1-4248-4188-a4ae-da25a794c603",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "f42cdd6a-b000-42cb-870f-5eb423a7f514",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "b9bf2423-d88c-4a7b-a051-627611d00dcc",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "2414a93d-d84f-406e-99c0-472161194b40",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "53a45d2d-170b-4cf5-b7bb-585120c8e2f5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "b4cff514-a91e-4798-a0b3-426ca13fc9c1",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "7821d13b-9b8b-4405-9618-78cd56b62cce",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "893f1a4d-919d-4388-8cb7-746d73ea7259",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "41a24047-7484-4ead-ae37-de907e5ff2b2",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "95910492-943f-44bd-9461-390240f243fd",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "1651949f-0ac0-4cb1-a06f-dafd74a407d1",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -220,9 +3291,145 @@ i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -235,4 +3442,38 @@ i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
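
For context on how the sysfs "id" files registered above are meant to be consumed, here is a minimal, illustrative userspace sketch. It is not part of this patch: it assumes the existing i915 perf uAPI from include/uapi/drm/i915_drm.h, a /sys/class/drm/card0/metrics/<guid>/id layout matching the groups registered above (the GUID shown is the SKL GT2 TestOa set), the Gen8+ I915_OA_FORMAT_A32u40_A4u32_B8_C8 report format added elsewhere in this series, an arbitrary OA exponent, and a renderD128 node; include paths and privileges (system-wide capture typically needs root or a relaxed paranoid setting) are also assumptions.

/* Illustrative userspace sketch -- not part of this patch. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <i915_drm.h>	/* assumes the libdrm include path */

int main(void)
{
	/* TestOa GUID registered above for SKL GT2; "card0" is an assumption */
	const char *id_path = "/sys/class/drm/card0/metrics/"
			      "1651949f-0ac0-4cb1-a06f-dafd74a407d1/id";
	unsigned long long metrics_set = 0;
	FILE *f = fopen(id_path, "r");

	if (!f)
		return 1;
	if (fscanf(f, "%llu", &metrics_set) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	/* Property pairs selecting the metric set, report format and period */
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16, /* arbitrary sampling period */
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC | I915_PERF_FLAG_DISABLED,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	int drm_fd = open("/dev/dri/renderD128", O_RDWR); /* assumed render node */
	if (drm_fd < 0)
		return 1;

	/* On success the ioctl returns a new stream fd; since the stream was
	 * opened DISABLED it still needs an explicit enable before reading. */
	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
	if (stream_fd < 0) {
		close(drm_fd);
		return 1;
	}

	ioctl(stream_fd, I915_PERF_IOCTL_ENABLE, 0);
	/* ... read()/poll() stream_fd for OA reports ... */

	close(stream_fd);
	close(drm_fd);
	return 0;
}
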
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
index e32d3b3ad77a..7765e22dfa17 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt3.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -33,9 +33,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt3 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -157,66 +174,2669 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900863 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900c62 },
+	{ _MMIO(0x9888), 0x53903333 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901150 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x55905111 },
+	{ _MMIO(0x9888), 0x45901400 },
+	{ _MMIO(0x9888), 0x479004a5 },
+	{ _MMIO(0x9888), 0x57903455 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900005 },
+	{ _MMIO(0x9888), 0x53900455 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900063 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53903333 },
+	{ _MMIO(0x9888), 0x43900840 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900005 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900402 },
+	{ _MMIO(0x9888), 0x53901550 },
+	{ _MMIO(0x9888), 0x45900080 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900050 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900115 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900884 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "4320492b-fd03-42ac-922f-dbe1ef3b7b58",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "bd2d9cae-b9ec-4f5b-9d2f-934bed398a2d",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "4ca0f3fe-7fd3-4924-98cb-1807d9879767",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "a0c0172c-ee13-403d-99ff-2bdf6936cf14",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "52435e0b-f188-42ea-8680-21a56ee20dee",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27076eeb-49f3-4fed-8423-c66506005c63",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "4616d450-2393-4836-8146-53c5ed84d359",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "8071b409-c39a-4674-94d7-32962ecfb512",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "5e0b391e-9ea8-4901-b2ff-b64ff616c7ed",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "25dc828e-1d2d-426e-9546-a1d4233cdf16",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "3dba9405-2d7e-4d70-8199-e734e82fd6bf",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "76935d7b-09c9-46bf-87f1-c18b4a86ebe5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "1b34c0d6-4f4c-4d7b-833f-4aaf236d87a6",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "b375c985-9953-455b-bda2-b03f7594e9db",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "3e2be2bb-884a-49bb-82c5-2358e6bd5f2d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "2d80a648-7b5a-4e92-bbe7-3b5c76f2e221",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "cfae9232-6ffc-42cc-a703-9790016925f0",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "2b985803-d3c9-4629-8a4f-634bfecba0e8",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -231,9 +2851,145 @@ i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -246,4 +3002,38 @@ i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
index ed034f190a6c..9ddab43a2176 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt4.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -33,9 +33,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt4 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -168,66 +185,2712 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53905555 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901110 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55901111 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57901411 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900411 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53905555 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x131203e0 },
+	{ _MMIO(0x9888), 0x133203e0 },
+	{ _MMIO(0x9888), 0x135203e0 },
+	{ _MMIO(0x9888), 0x1a4ef000 },
+	{ _MMIO(0x9888), 0x1c4e0003 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0c4c02a0 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0150 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x182c00a8 },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1acef000 },
+	{ _MMIO(0x9888), 0x1cce0003 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0ccc02a0 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x0c8d8000 },
+	{ _MMIO(0x9888), 0x0e8da000 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x108f0150 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x18ac00a8 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x072f8000 },
+	{ _MMIO(0x9888), 0x0d4c0100 },
+	{ _MMIO(0x9888), 0x0d0d8000 },
+	{ _MMIO(0x9888), 0x0f0da000 },
+	{ _MMIO(0x9888), 0x110f01b0 },
+	{ _MMIO(0x9888), 0x192c0080 },
+	{ _MMIO(0x9888), 0x0f2d4000 },
+	{ _MMIO(0x9888), 0x0f108000 },
+	{ _MMIO(0x9888), 0x0f118000 },
+	{ _MMIO(0x9888), 0x0f121980 },
+	{ _MMIO(0x9888), 0x01120000 },
+	{ _MMIO(0x9888), 0x0f134000 },
+	{ _MMIO(0x9888), 0x0f304000 },
+	{ _MMIO(0x9888), 0x0f314000 },
+	{ _MMIO(0x9888), 0x0f320033 },
+	{ _MMIO(0x9888), 0x01320000 },
+	{ _MMIO(0x9888), 0x0f331000 },
+	{ _MMIO(0x9888), 0x0d508000 },
+	{ _MMIO(0x9888), 0x0d518000 },
+	{ _MMIO(0x9888), 0x0d521980 },
+	{ _MMIO(0x9888), 0x01520000 },
+	{ _MMIO(0x9888), 0x0d534000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51901100 },
+	{ _MMIO(0x9888), 0x41901000 },
+	{ _MMIO(0x9888), 0x43901423 },
+	{ _MMIO(0x9888), 0x53903331 },
+	{ _MMIO(0x9888), 0x45900044 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900010 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900821 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "7277228f-e7f3-4743-945a-6a2049d11377",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "463c668c-3f60-49b6-8f85-d995b635b3b2",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "3ae6e74c-72c3-4040-9bd0-7961430b8cc8",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "055f256d-4052-467c-8dec-6064a4806433",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "753972d4-87cd-4460-824d-754463ac5054",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "4e4392e9-8f73-457b-ab44-b49f7a0c733b",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "730d95dd-7da8-4e1c-ab8d-c0eb1e4c1805",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "d9e86d70-462b-462a-851e-fd63e8c13d63",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "52200424-6ee9-48b3-b7fa-0afcf1975e4d",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "1988315f-0a26-44df-acb0-df7ec86b1456",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "f1f17ca7-286e-4ae5-9d15-9fccad6c665d",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "00a9e0fb-3d2e-4405-852c-dce6334ffb3b",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "13dcc50a-7ec0-409b-99d6-a3f932cedcb3",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "97875e21-6624-4aee-9191-682feb3eae21",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "a5aa857d-e8f0-4dfa-8981-ce340fa748fd",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "0e8d8b86-4ee7-4cdd-aaaa-58adc92cb29e",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "882fa433-1f4a-4a67-a962-c741888fe5f5",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -242,9 +2905,145 @@ i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -257,4 +3056,38 @@ i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 08/14] drm/i915/perf: per-gen timebase for checking sample freq
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (6 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 09/14] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
                       ` (5 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

An oa_exponent_to_ns() utility and per-gen timebase constants were
recently removed when updating the tail pointer race condition WA; this
patch restores them so that the _PROP_OA_EXPONENT validation done in
read_properties_unlocked() no longer assumes the 12.5MHz timebase we had
on Haswell.

Accordingly, the oa_sample_rate_hard_limit value referenced by
proc_dointvec_minmax, which defines the absolute limit for the OA
sampling frequency, is now initialized to (timestamp_frequency / 2)
instead of the 6.25MHz constant used for Haswell.
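
As a rough userspace sketch (not part of the patch) of the same
arithmetic, using the per-gen timestamp frequencies this series sets in
i915_perf_init(), the smallest programmable period (exponent 0) works
out as follows:

#include <stdio.h>
#include <stdint.h>

/* Same formula as the kernel helper:
 * period_ns = (2^(exponent + 1) * NSEC_PER_SEC) / timestamp_frequency
 */
static uint64_t oa_exponent_to_ns(uint64_t timestamp_frequency, int exponent)
{
	return (1000000000ULL * (2ULL << exponent)) / timestamp_frequency;
}

int main(void)
{
	printf("HSW/BDW (12.5MHz): %llu ns\n",
	       (unsigned long long)oa_exponent_to_ns(12500000, 0)); /* 160 */
	printf("SKL (12MHz):       %llu ns\n",
	       (unsigned long long)oa_exponent_to_ns(12000000, 0)); /* 166 */
	printf("BXT (19.2MHz):     %llu ns\n",
	       (unsigned long long)oa_exponent_to_ns(19200000, 0)); /* 104 */
	return 0;
}

Halving each timestamp frequency gives the corresponding
oa_sample_rate_hard_limit described above.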

v2:
    Specify frequency of 19.2MHz for BXT (Ville)
    Initialize oa_sample_rate_hard_limit per-gen too (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  1 +
 drivers/gpu/drm/i915/i915_perf.c | 39 ++++++++++++++++++++++++++++-----------
 2 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7741881e9605..b0f5d6fae6b0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2394,6 +2394,7 @@ struct drm_i915_private {
 
 			bool periodic;
 			int period_exponent;
+			int timestamp_frequency;
 
 			int metrics_set;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 9f524a0ec727..4fab9974b1b8 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -288,10 +288,12 @@ static u32 i915_perf_stream_paranoid = true;
 
 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
- * 160ns is the smallest sampling period we can theoretically program the OA
- * unit with on Haswell, corresponding to 6.25MHz.
+ * The highest sampling frequency we can theoretically program the OA unit
+ * with is always half the timestamp frequency: e.g. 6.25MHz for Haswell.
+ *
+ * Initialized just before we register the sysctl parameter.
  */
-static int oa_sample_rate_hard_limit = 6250000;
+static int oa_sample_rate_hard_limit;
 
 /* Theoretically we can program the OA unit to sample every 160ns but don't
  * allow that by default unless root...
@@ -2613,6 +2615,12 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	return ret;
 }
 
+static u64 oa_exponent_to_ns(struct drm_i915_private *dev_priv, int exponent)
+{
+	return div_u64(1000000000ULL * (2ULL << exponent),
+		       dev_priv->perf.oa.timestamp_frequency);
+}
+
 /**
  * read_properties_unlocked - validate + copy userspace stream open properties
  * @dev_priv: i915 device instance
@@ -2709,16 +2717,13 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			}
 
 			/* Theoretically we can program the OA unit to sample
-			 * every 160ns but don't allow that by default unless
-			 * root.
-			 *
-			 * On Haswell the period is derived from the exponent
-			 * as:
-			 *
-			 *   period = 80ns * 2^(exponent + 1)
+			 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns
+			 * for BXT. We don't allow such high sampling
+			 * frequencies by default unless root.
 			 */
+
 			BUILD_BUG_ON(sizeof(oa_period) != 8);
-			oa_period = 80ull * (2ull << value);
+			oa_period = oa_exponent_to_ns(dev_priv, value);
 
 			/* This check is primarily to ensure that oa_period <=
 			 * UINT32_MAX (before passing to do_div which only
@@ -2974,6 +2979,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.ops.oa_hw_tail_read =
 			gen7_oa_hw_tail_read;
 
+		dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
 
 		dev_priv->perf.oa.n_builtin_sets =
@@ -2989,6 +2996,9 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		if (IS_GEN8(dev_priv)) {
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+
+			dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
 
 			if (IS_BROADWELL(dev_priv)) {
@@ -3005,6 +3015,9 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		} else if (IS_GEN9(dev_priv)) {
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+
+			dev_priv->perf.oa.timestamp_frequency = 12000000;
+
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
 
 			if (IS_SKL_GT2(dev_priv)) {
@@ -3023,6 +3036,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_sklgt4;
 			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.timestamp_frequency = 19200000;
+
 				dev_priv->perf.oa.n_builtin_sets =
 					i915_oa_n_builtin_metric_sets_bxt;
 				dev_priv->perf.oa.ops.select_metric_set =
@@ -3057,6 +3072,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
+		oa_sample_rate_hard_limit =
+			dev_priv->perf.oa.timestamp_frequency / 2;
 		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
 
 		dev_priv->perf.initialized = true;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 09/14] drm/i915/perf: remove perf.hook_lock
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (7 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 08/14] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 10/14] drm/i915: add KBL GT2/GT3 check macros Lionel Landwerlin
                       ` (4 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

In earlier iterations of the i915-perf driver we had a number of
callbacks/hooks from other parts of the i915 driver, e.g. to notify us
when a legacy context was pinned. These could run asynchronously with
respect to the stream file operations and might also run in atomic
context.

dev_priv->perf.hook_lock existed to serialise access to state needed
within these callbacks, but as the code has evolved some of the hooks
have gone away or have been reworked so that they no longer need to lock
any state.

The remaining use of this lock was actually redundant considering how
the gen7 oacontrol state used to be updated as part of a context pin
hook.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  2 --
 drivers/gpu/drm/i915/i915_perf.c | 32 ++++++++++----------------------
 2 files changed, 10 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b0f5d6fae6b0..cdd6f8fa105a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2375,8 +2375,6 @@ struct drm_i915_private {
 		struct mutex lock;
 		struct list_head streams;
 
-		spinlock_t hook_lock;
-
 		struct {
 			struct i915_perf_stream *exclusive_stream;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 4fab9974b1b8..9e5307bbaab1 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1810,9 +1810,17 @@ static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
 	gen8_configure_all_contexts(dev_priv, false);
 }
 
-static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
+static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 {
-	lockdep_assert_held(&dev_priv->perf.hook_lock);
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory which this helps maintain.
+	 */
+	gen7_init_oa_buffer(dev_priv);
 
 	if (dev_priv->perf.oa.exclusive_stream->enabled) {
 		struct i915_gem_context *ctx =
@@ -1835,25 +1843,6 @@ static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 		I915_WRITE(GEN7_OACONTROL, 0);
 }
 
-static void gen7_oa_enable(struct drm_i915_private *dev_priv)
-{
-	unsigned long flags;
-
-	/* Reset buf pointers so we don't forward reports from before now.
-	 *
-	 * Think carefully if considering trying to avoid this, since it
-	 * also ensures status flags and the buffer itself are cleared
-	 * in error paths, and we have checks for invalid reports based
-	 * on the assumption that certain fields are written to zeroed
-	 * memory which this helps maintains.
-	 */
-	gen7_init_oa_buffer(dev_priv);
-
-	spin_lock_irqsave(&dev_priv->perf.hook_lock, flags);
-	gen7_update_oacontrol_locked(dev_priv);
-	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
-}
-
 static void gen8_oa_enable(struct drm_i915_private *dev_priv)
 {
 	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
@@ -3069,7 +3058,6 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 
 		INIT_LIST_HEAD(&dev_priv->perf.streams);
 		mutex_init(&dev_priv->perf.lock);
-		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
 		oa_sample_rate_hard_limit =
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 10/14] drm/i915: add KBL GT2/GT3 check macros
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (8 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 09/14] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 11/14] drm/i915/perf: add KBL support Lionel Landwerlin
                       ` (3 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

Add macros to detect GT2/GT3 SKUs so we can apply the proper OA
configuration later.
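
As a minimal sketch (not from the patch itself) of how the checks
behave: only the 0x00F0 nibble of the PCI device id is compared, with
0x0010 meaning GT2 and 0x0020 meaning GT3. The device id used below is
just an assumed example value:

#include <stdio.h>

#define KBL_GT_MASK	0x00f0
#define KBL_GT2_ID	0x0010
#define KBL_GT3_ID	0x0020

int main(void)
{
	unsigned int devid = 0x5916; /* assumed example id with a GT2 nibble */

	printf("GT2? %d\n", (devid & KBL_GT_MASK) == KBL_GT2_ID); /* prints 1 */
	printf("GT3? %d\n", (devid & KBL_GT_MASK) == KBL_GT3_ID); /* prints 0 */
	return 0;
}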

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index cdd6f8fa105a..cd1dc9ee05cb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2802,6 +2802,10 @@ intel_info(const struct drm_i915_private *dev_priv)
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0030)
+#define IS_KBL_GT2(dev_priv)	(IS_KABYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
+#define IS_KBL_GT3(dev_priv)	(IS_KABYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 
 #define IS_ALPHA_SUPPORT(intel_info) ((intel_info)->is_alpha_support)
 
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 11/14] drm/i915/perf: add KBL support
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (9 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 10/14] drm/i915: add KBL GT2/GT3 check macros Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 12/14] drm/i915/perf: add GLK support Lionel Landwerlin
                       ` (2 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

Add OA support for Kabylake (pretty much identical to Skylake), and
also add the associated OA configurations.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/Makefile         |    4 +-
 drivers/gpu/drm/i915/i915_oa_kblgt2.c | 2991 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt2.h |   40 +
 drivers/gpu/drm/i915/i915_oa_kblgt3.c | 3040 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt3.h |   40 +
 drivers/gpu/drm/i915/i915_perf.c      |   30 +-
 6 files changed, 6143 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index aabc660f94cb..859e1751d7ab 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -134,7 +134,9 @@ i915-y += i915_perf.o \
 	  i915_oa_sklgt2.o \
 	  i915_oa_sklgt3.o \
 	  i915_oa_sklgt4.o \
-	  i915_oa_bxt.o
+	  i915_oa_bxt.o \
+	  i915_oa_kblgt2.o \
+	  i915_oa_kblgt3.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt2.c b/drivers/gpu/drm/i915/i915_oa_kblgt2.c
new file mode 100644
index 000000000000..87dbd0a0b076
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt2.c
@@ -0,0 +1,2991 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_kblgt2.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
+};
+
+int i915_oa_n_builtin_metric_sets_kblgt2 = 18;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0080 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0d2000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162c2200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190001f },
+	{ _MMIO(0x9888), 0x51904400 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c21 },
+	{ _MMIO(0x9888), 0x47900061 },
+	{ _MMIO(0x9888), 0x57904440 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900004 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57900400 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0e0f006c },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1190e000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c00 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x143a5800 },
+	{ _MMIO(0x9888), 0x163a00c0 },
+	{ _MMIO(0x9888), 0x12380240 },
+	{ _MMIO(0x9888), 0x14380002 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c1500 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f9500 },
+	{ _MMIO(0x9888), 0x100f002a },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x0a2dc000 },
+	{ _MMIO(0x9888), 0x0c2dc000 },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x06393000 },
+	{ _MMIO(0x9888), 0x0c3a28c1 },
+	{ _MMIO(0x9888), 0x003a0000 },
+	{ _MMIO(0x9888), 0x0a33f000 },
+	{ _MMIO(0x9888), 0x0c33f000 },
+	{ _MMIO(0x9888), 0x0a37a000 },
+	{ _MMIO(0x9888), 0x0c37a000 },
+	{ _MMIO(0x9888), 0x0a380977 },
+	{ _MMIO(0x9888), 0x08380000 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06383000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900800 },
+	{ _MMIO(0x9888), 0x47901000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900844 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_kblgt2(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f8d677e9-ff6f-4df1-9310-0334c6efacce",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "e17fc42a-e614-41b6-90c4-1074841a6c77",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "d7a17a3a-ca71-40d2-a919-ace80d50633f",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "57b59202-172b-477a-87de-33f85572c589",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "3addf8ef-8e9b-40f5-a448-3dbb5d5128b0",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "4af0400a-81c3-47db-a6b6-deddbd75680e",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "0e22f995-79ca-4f67-83ab-e9d9772488d8",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "bc2a00f7-cb8a-4ff2-8ad0-e241dad16937",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "d2bbe790-f058-42d9-81c6-cdedcf655bc2",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "2f8e32e4-5956-46e2-af31-c8ea95887332",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "ca046aad-b5fb-4101-adce-6473ee6e5b14",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "605f388f-24bb-455c-88e3-8d57ae0d7e9f",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "31dd157c-bf4e-4bab-bf2b-f5c8174af1af",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "105db928-5542-466b-9128-e1f3c91426cb",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "03db94d2-b37f-4c58-a791-0d2067b013bb",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "aa7a3fb9-22fb-43ff-a32d-0ab6c13bbd16",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "398a4268-ef6f-4ffc-b55f-3c7b5363ce61",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "baa3c7e4-52b6-4b85-801e-465a94b746dd",
+	.attrs =  attrs_test_oa,
+};
+
+int
+i915_perf_register_sysfs_kblgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
+
+	return 0;
+
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_kblgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt2.h b/drivers/gpu/drm/i915/i915_oa_kblgt2.h
new file mode 100644
index 000000000000..7e61bfc4f9f5
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt2.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_KBLGT2_H__
+#define __I915_OA_KBLGT2_H__
+
+extern int i915_oa_n_builtin_metric_sets_kblgt2;
+
+extern int i915_oa_select_metric_set_kblgt2(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_kblgt2(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_kblgt2(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt3.c b/drivers/gpu/drm/i915/i915_oa_kblgt3.c
new file mode 100644
index 000000000000..6ed092566a32
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt3.c
@@ -0,0 +1,3040 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_kblgt3.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
+};
+
+int i915_oa_n_builtin_metric_sets_kblgt3 = 18;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0380 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162ca200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0200 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x51902240 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900242 },
+	{ _MMIO(0x9888), 0x45900084 },
+	{ _MMIO(0x9888), 0x47901400 },
+	{ _MMIO(0x9888), 0x57902220 },
+	{ _MMIO(0x9888), 0x49900c60 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900002 },
+	{ _MMIO(0x9888), 0x43900c63 },
+	{ _MMIO(0x9888), 0x53902222 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57900400 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
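+/* The compute_extra set needs no B counter or flex EU programming. */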
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900002 },
+	{ _MMIO(0x9888), 0x53900420 },
+	{ _MMIO(0x9888), 0x459000a1 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900040 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900004 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x479008a5 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
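+/*
+ * TEST_OA: a minimal configuration for exercising the OA unit (e.g. from
+ * the igt perf tests) rather than for profiling.
+ */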
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
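+/*
+ * Map the metrics_set ID requested by userspace onto the B counter, flex
+ * EU and mux register lists that the perf core programs when an OA stream
+ * is opened with this metric set.
+ */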
+int i915_oa_select_metric_set_kblgt3(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
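+/*
+ * sysfs boilerplate: each metric set is exposed as a directory named after
+ * the configuration's UUID, containing a read-only "id" file reporting the
+ * corresponding METRIC_SET_ID_* value that userspace passes back when
+ * opening an OA stream.
+ */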
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "0286c920-2f6d-493b-b22d-7a5280df43de",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "9823aaa1-b06f-40ce-884b-cd798c79f0c2",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "c7c735f3-ce58-45cf-aa04-30b183f1faff",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "96ec2219-040b-428a-856a-6bc03363a057",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "03372b64-4996-4d3b-aa18-790e75eeb9c2",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "31b4ce5a-bd61-4c1f-bb5d-f2e731412150",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "2ce0911a-27fc-4887-96f0-11084fa807c3",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "546c4c1d-99b8-42fb-a107-5aaabb5314a8",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "4e93d156-9b39-4268-8544-a8e0480806d7",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "de1bec86-ca92-4b43-89fa-147653221cc0",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "e63537bb-10be-4d4a-92c4-c6b0c65e02ef",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "7a03a9f8-ec5e-46bb-8b67-1f0ff1476281",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "b25d2ebf-a6e0-4b29-96be-a9b010edeeda",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "469a05e5-e299-46f7-9598-7b05f3c34991",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "52f925c6-786a-4ec6-86ce-cba85c83453a",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "efc497ac-884e-4ee4-a4a8-15fba22aaf21",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "bfd9764d-2c5b-4c16-bfc1-89de3ca10917",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "f1792f32-6db2-4b50-b4b2-557128f1688d",
+	.attrs =  attrs_test_oa,
+};
+
+int
+i915_perf_register_sysfs_kblgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
+
+	return 0;
+
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_kblgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt3.h b/drivers/gpu/drm/i915/i915_oa_kblgt3.h
new file mode 100644
index 000000000000..b0ca7f3114d3
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt3.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_KBLGT3_H__
+#define __I915_OA_KBLGT3_H__
+
+extern int i915_oa_n_builtin_metric_sets_kblgt3;
+
+extern int i915_oa_select_metric_set_kblgt3(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_kblgt3(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_kblgt3(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 9e5307bbaab1..d5126faf2a35 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -202,6 +202,8 @@
 #include "i915_oa_sklgt3.h"
 #include "i915_oa_sklgt4.h"
 #include "i915_oa_bxt.h"
+#include "i915_oa_kblgt2.h"
+#include "i915_oa_kblgt3.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -1778,7 +1780,8 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
 	 * be read back from automatically triggered reports, as part of the
 	 * RPT_ID field.
 	 */
-	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv) ||
+	    IS_KABYLAKE(dev_priv)) {
 		I915_WRITE(GEN8_OA_DEBUG,
 			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
 					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
@@ -2857,6 +2860,15 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	} else if (IS_BROXTON(dev_priv)) {
 		if (i915_perf_register_sysfs_bxt(dev_priv))
 			goto sysfs_error;
+	} else if (IS_KABYLAKE(dev_priv)) {
+		if (IS_KBL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_kblgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_KBL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_kblgt3(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
 	}
 
 	goto exit;
@@ -2898,6 +2910,12 @@ void i915_perf_unregister(struct drm_i915_private *dev_priv)
 			i915_perf_unregister_sysfs_sklgt4(dev_priv);
 	} else if (IS_BROXTON(dev_priv))
 		i915_perf_unregister_sysfs_bxt(dev_priv);
+	else if (IS_KABYLAKE(dev_priv)) {
+		if (IS_KBL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_kblgt2(dev_priv);
+		else if (IS_KBL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_kblgt3(dev_priv);
+	}
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -3031,6 +3049,16 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 					i915_oa_n_builtin_metric_sets_bxt;
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_bxt;
+			} else if (IS_KBL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_kblgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_kblgt2;
+			} else if (IS_KBL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_kblgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_kblgt3;
 			}
 		}
 
-- 
2.11.0


* [PATCH v14 12/14] drm/i915/perf: add GLK support
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (10 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 11/14] drm/i915/perf: add KBL support Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

Add OA support for Geminilake (whose OA unit is pretty much identical to
Broxton's), along with the associated OA configurations.
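
For reference, here is a minimal userspace sketch (illustration only, not part
of this patch) of how one of these metric sets could be consumed once exposed:
read the set's ID from its sysfs group and pass it to the i915 perf open
ioctl. The sysfs path, the "<uuid>" placeholder, card0 and the OA exponent
below are assumptions for illustration, not values taken from this series.

#include <drm/i915_drm.h>

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	/* A real tool would enumerate /sys/class/drm/cardN/metrics/<uuid>/id;
	 * the path here is a placeholder. */
	const char *id_path = "/sys/class/drm/card0/metrics/<uuid>/id";
	unsigned long metrics_set = 0;
	FILE *f = fopen(id_path, "r");

	if (!f || fscanf(f, "%lu", &metrics_set) != 1) {
		perror("reading metric set id");
		return EXIT_FAILURE;
	}
	fclose(f);

	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
		/* gen8+ report format introduced earlier in this series */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		/* periodic sampling exponent; 16 is an arbitrary example */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC | I915_PERF_FLAG_FD_NONBLOCK,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uint64_t)(uintptr_t)properties,
	};

	int drm_fd = open("/dev/dri/card0", O_RDWR);
	if (drm_fd < 0) {
		perror("opening DRM device");
		return EXIT_FAILURE;
	}

	/* On success the ioctl returns a new fd that can be poll()ed and
	 * read() for OA reports. */
	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
	if (stream_fd < 0) {
		perror("DRM_IOCTL_I915_PERF_OPEN");
		return EXIT_FAILURE;
	}

	close(stream_fd);
	close(drm_fd);
	return EXIT_SUCCESS;
}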

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/Makefile      |    3 +-
 drivers/gpu/drm/i915/i915_oa_glk.c | 2602 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_glk.h |   40 +
 drivers/gpu/drm/i915/i915_perf.c   |   17 +-
 4 files changed, 2659 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 859e1751d7ab..bb781bc628df 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -136,7 +136,8 @@ i915-y += i915_perf.o \
 	  i915_oa_sklgt4.o \
 	  i915_oa_bxt.o \
 	  i915_oa_kblgt2.o \
-	  i915_oa_kblgt3.o
+	  i915_oa_kblgt3.o \
+	  i915_oa_glk.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_oa_glk.c b/drivers/gpu/drm/i915/i915_oa_glk.c
new file mode 100644
index 000000000000..2f356d51bff8
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_glk.c
@@ -0,0 +1,2602 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_glk.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_TEST_OA,
+};
+
+int i915_oa_n_builtin_metric_sets_glk = 15;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c00f0 },
+	{ _MMIO(0x9888), 0x12120280 },
+	{ _MMIO(0x9888), 0x12320280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x419000a0 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2e0800 },
+	{ _MMIO(0x9888), 0x0e2e5900 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x1c4f0010 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a0fcc00 },
+	{ _MMIO(0x9888), 0x1c0f0002 },
+	{ _MMIO(0x9888), 0x1c2c0040 },
+	{ _MMIO(0x9888), 0x00101000 },
+	{ _MMIO(0x9888), 0x04101000 },
+	{ _MMIO(0x9888), 0x00114000 },
+	{ _MMIO(0x9888), 0x08114000 },
+	{ _MMIO(0x9888), 0x00120020 },
+	{ _MMIO(0x9888), 0x08120021 },
+	{ _MMIO(0x9888), 0x00141000 },
+	{ _MMIO(0x9888), 0x08141000 },
+	{ _MMIO(0x9888), 0x02308000 },
+	{ _MMIO(0x9888), 0x04302000 },
+	{ _MMIO(0x9888), 0x06318000 },
+	{ _MMIO(0x9888), 0x08318000 },
+	{ _MMIO(0x9888), 0x06320800 },
+	{ _MMIO(0x9888), 0x08320840 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x08344000 },
+	{ _MMIO(0x9888), 0x0d931831 },
+	{ _MMIO(0x9888), 0x0f939f3f },
+	{ _MMIO(0x9888), 0x01939e80 },
+	{ _MMIO(0x9888), 0x039303bc },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1993002a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900187 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901110 },
+	{ _MMIO(0x9888), 0x43900423 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900c02 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900020 },
+	{ _MMIO(0x9888), 0x59901111 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900821 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d4000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x0c2e1400 },
+	{ _MMIO(0x9888), 0x0e2e5100 },
+	{ _MMIO(0x9888), 0x102e0114 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004f6b42 },
+	{ _MMIO(0x9888), 0x064f6200 },
+	{ _MMIO(0x9888), 0x084f4100 },
+	{ _MMIO(0x9888), 0x0a4f0061 },
+	{ _MMIO(0x9888), 0x0c4f6c4c },
+	{ _MMIO(0x9888), 0x0e4f4b00 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0f8800 },
+	{ _MMIO(0x9888), 0x1c0f08a2 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c1451 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c0010 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x19938a28 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x19900177 },
+	{ _MMIO(0x9888), 0x1b900178 },
+	{ _MMIO(0x9888), 0x1d900125 },
+	{ _MMIO(0x9888), 0x1f900123 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c2e001f },
+	{ _MMIO(0x9888), 0x0a2f0000 },
+	{ _MMIO(0x9888), 0x10186800 },
+	{ _MMIO(0x9888), 0x11810019 },
+	{ _MMIO(0x9888), 0x15810013 },
+	{ _MMIO(0x9888), 0x13820020 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x17840000 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x21860000 },
+	{ _MMIO(0x9888), 0x178703e0 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x022e5400 },
+	{ _MMIO(0x9888), 0x002e0000 },
+	{ _MMIO(0x9888), 0x0e2e0080 },
+	{ _MMIO(0x9888), 0x082f0040 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x06174000 },
+	{ _MMIO(0x9888), 0x06180012 },
+	{ _MMIO(0x9888), 0x00180000 },
+	{ _MMIO(0x9888), 0x0d804000 },
+	{ _MMIO(0x9888), 0x0f804000 },
+	{ _MMIO(0x9888), 0x05804000 },
+	{ _MMIO(0x9888), 0x09810200 },
+	{ _MMIO(0x9888), 0x0b810030 },
+	{ _MMIO(0x9888), 0x03810003 },
+	{ _MMIO(0x9888), 0x21819140 },
+	{ _MMIO(0x9888), 0x23819050 },
+	{ _MMIO(0x9888), 0x25810018 },
+	{ _MMIO(0x9888), 0x0b820980 },
+	{ _MMIO(0x9888), 0x03820d80 },
+	{ _MMIO(0x9888), 0x11820000 },
+	{ _MMIO(0x9888), 0x0182c000 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x09824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0d830004 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x0f831000 },
+	{ _MMIO(0x9888), 0x01848072 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x09844000 },
+	{ _MMIO(0x9888), 0x0f848000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x09860092 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x01869100 },
+	{ _MMIO(0x9888), 0x0f870065 },
+	{ _MMIO(0x9888), 0x01870000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b952000 },
+	{ _MMIO(0x9888), 0x1d955055 },
+	{ _MMIO(0x9888), 0x1f951455 },
+	{ _MMIO(0x9888), 0x0992a000 },
+	{ _MMIO(0x9888), 0x0f928000 },
+	{ _MMIO(0x9888), 0x1192a800 },
+	{ _MMIO(0x9888), 0x1392028a },
+	{ _MMIO(0x9888), 0x0b92a000 },
+	{ _MMIO(0x9888), 0x0d922000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c01 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900863 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900c22 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x41900003 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900170 },
+	{ _MMIO(0x9888), 0x21900171 },
+	{ _MMIO(0x9888), 0x23900172 },
+	{ _MMIO(0x9888), 0x25900173 },
+	{ _MMIO(0x9888), 0x27900174 },
+	{ _MMIO(0x9888), 0x29900175 },
+	{ _MMIO(0x9888), 0x2b900176 },
+	{ _MMIO(0x9888), 0x2d900177 },
+	{ _MMIO(0x9888), 0x2f90017f },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900000 },
+	{ _MMIO(0x9888), 0x41900080 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900180 },
+	{ _MMIO(0x9888), 0x21900181 },
+	{ _MMIO(0x9888), 0x23900182 },
+	{ _MMIO(0x9888), 0x25900183 },
+	{ _MMIO(0x9888), 0x27900184 },
+	{ _MMIO(0x9888), 0x29900185 },
+	{ _MMIO(0x9888), 0x2b900186 },
+	{ _MMIO(0x9888), 0x2d900187 },
+	{ _MMIO(0x9888), 0x2f900170 },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x141c0160 },
+	{ _MMIO(0x9888), 0x161c0015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d5000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x0c2e5400 },
+	{ _MMIO(0x9888), 0x0e2e5515 },
+	{ _MMIO(0x9888), 0x102e0155 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x0e4cc000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0a4ea000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x0e4f4b41 },
+	{ _MMIO(0x9888), 0x004f4200 },
+	{ _MMIO(0x9888), 0x024f404c },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0a1bc000 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x001c0031 },
+	{ _MMIO(0x9888), 0x061c1900 },
+	{ _MMIO(0x9888), 0x081c1a33 },
+	{ _MMIO(0x9888), 0x0a1c1b35 },
+	{ _MMIO(0x9888), 0x0c1c3337 },
+	{ _MMIO(0x9888), 0x041c31c7 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0fa8aa },
+	{ _MMIO(0x9888), 0x1c0f0aaa },
+	{ _MMIO(0x9888), 0x182c8000 },
+	{ _MMIO(0x9888), 0x1c2c6aaa },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c2950 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993aaaa },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900400 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900001 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c03b0 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x0c2e0400 },
+	{ _MMIO(0x9888), 0x0e2e1500 },
+	{ _MMIO(0x9888), 0x102e0140 },
+	{ _MMIO(0x9888), 0x044c4000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004e2000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x1a4f4001 },
+	{ _MMIO(0x9888), 0x1c4f5005 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x180f1000 },
+	{ _MMIO(0x9888), 0x1a0fa800 },
+	{ _MMIO(0x9888), 0x1c0f0a00 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c4015 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x03931980 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993a00a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900178 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x022d4000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x064c8000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x024f6100 },
+	{ _MMIO(0x9888), 0x044f416b },
+	{ _MMIO(0x9888), 0x064f004b },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1a0f02a8 },
+	{ _MMIO(0x9888), 0x1a2c5500 },
+	{ _MMIO(0x9888), 0x0f808000 },
+	{ _MMIO(0x9888), 0x25810020 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1f951000 },
+	{ _MMIO(0x9888), 0x13920200 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4d900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x12643400 },
+	{ _MMIO(0x9888), 0x12653400 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x0a640024 },
+	{ _MMIO(0x9888), 0x10640000 },
+	{ _MMIO(0x9888), 0x04640000 },
+	{ _MMIO(0x9888), 0x0c650024 },
+	{ _MMIO(0x9888), 0x10650000 },
+	{ _MMIO(0x9888), 0x06650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102d7800 },
+	{ _MMIO(0x9888), 0x122d79e0 },
+	{ _MMIO(0x9888), 0x0c2f0004 },
+	{ _MMIO(0x9888), 0x100e3800 },
+	{ _MMIO(0x9888), 0x180f0005 },
+	{ _MMIO(0x9888), 0x002d0940 },
+	{ _MMIO(0x9888), 0x022d802f },
+	{ _MMIO(0x9888), 0x042d4013 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0050 },
+	{ _MMIO(0x9888), 0x022f0010 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x040e0480 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x060f0027 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x1a0f0040 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x439014a0 },
+	{ _MMIO(0x9888), 0x459000a4 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x121300a0 },
+	{ _MMIO(0x9888), 0x141600ab },
+	{ _MMIO(0x9888), 0x123300a0 },
+	{ _MMIO(0x9888), 0x143600ab },
+	{ _MMIO(0x9888), 0x125300a0 },
+	{ _MMIO(0x9888), 0x145600ab },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e01a0 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0065 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0800 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f023f },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2cc030 },
+	{ _MMIO(0x9888), 0x04132180 },
+	{ _MMIO(0x9888), 0x02130000 },
+	{ _MMIO(0x9888), 0x0c148000 },
+	{ _MMIO(0x9888), 0x0e142000 },
+	{ _MMIO(0x9888), 0x04148000 },
+	{ _MMIO(0x9888), 0x1e150140 },
+	{ _MMIO(0x9888), 0x1c150040 },
+	{ _MMIO(0x9888), 0x0c163000 },
+	{ _MMIO(0x9888), 0x0e160068 },
+	{ _MMIO(0x9888), 0x10160000 },
+	{ _MMIO(0x9888), 0x18160000 },
+	{ _MMIO(0x9888), 0x0a164000 },
+	{ _MMIO(0x9888), 0x04330043 },
+	{ _MMIO(0x9888), 0x02330000 },
+	{ _MMIO(0x9888), 0x0234a000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x1c350015 },
+	{ _MMIO(0x9888), 0x02363460 },
+	{ _MMIO(0x9888), 0x10360000 },
+	{ _MMIO(0x9888), 0x04360000 },
+	{ _MMIO(0x9888), 0x06360000 },
+	{ _MMIO(0x9888), 0x08364000 },
+	{ _MMIO(0x9888), 0x06530043 },
+	{ _MMIO(0x9888), 0x02530000 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x06542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550100 },
+	{ _MMIO(0x9888), 0x0e563000 },
+	{ _MMIO(0x9888), 0x00563400 },
+	{ _MMIO(0x9888), 0x10560000 },
+	{ _MMIO(0x9888), 0x18560000 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x0c564000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9014a0 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900820 },
+	{ _MMIO(0x9888), 0x45901022 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x141a0000 },
+	{ _MMIO(0x9888), 0x143a0000 },
+	{ _MMIO(0x9888), 0x145a0000 },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0150 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e006a },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0bc0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f0302 },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2c00f0 },
+	{ _MMIO(0x9888), 0x021a3080 },
+	{ _MMIO(0x9888), 0x041a31e5 },
+	{ _MMIO(0x9888), 0x02148000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150054 },
+	{ _MMIO(0x9888), 0x06168000 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x0c3a3280 },
+	{ _MMIO(0x9888), 0x0e3a0063 },
+	{ _MMIO(0x9888), 0x063a0061 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x0c348000 },
+	{ _MMIO(0x9888), 0x0e342000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1e350140 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x18360028 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x0e5a3080 },
+	{ _MMIO(0x9888), 0x005a3280 },
+	{ _MMIO(0x9888), 0x025a0063 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x02542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550001 },
+	{ _MMIO(0x9888), 0x18560080 },
+	{ _MMIO(0x9888), 0x02568000 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x45901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x141a026b },
+	{ _MMIO(0x9888), 0x143a0173 },
+	{ _MMIO(0x9888), 0x145a026b },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0069 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x180f6000 },
+	{ _MMIO(0x9888), 0x1a0f030a },
+	{ _MMIO(0x9888), 0x1a2c03c0 },
+	{ _MMIO(0x9888), 0x041a37e7 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150050 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x003a3380 },
+	{ _MMIO(0x9888), 0x063a006f },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x00348000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1a352000 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x02368000 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x025a37e7 },
+	{ _MMIO(0x9888), 0x0254a000 },
+	{ _MMIO(0x9888), 0x1c550005 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x06568000 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900020 },
+	{ _MMIO(0x9888), 0x45901080 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x141a001f },
+	{ _MMIO(0x9888), 0x143a001f },
+	{ _MMIO(0x9888), 0x145a001f },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0094 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x1a0f00e0 },
+	{ _MMIO(0x9888), 0x1a2c0c00 },
+	{ _MMIO(0x9888), 0x061a0063 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x06142000 },
+	{ _MMIO(0x9888), 0x1c150100 },
+	{ _MMIO(0x9888), 0x0c168000 },
+	{ _MMIO(0x9888), 0x043a3180 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x04348000 },
+	{ _MMIO(0x9888), 0x1c350040 },
+	{ _MMIO(0x9888), 0x0a368000 },
+	{ _MMIO(0x9888), 0x045a0063 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x1c550010 },
+	{ _MMIO(0x9888), 0x08568000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900004 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x19800000 },
+	{ _MMIO(0x9888), 0x07800063 },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x23810008 },
+	{ _MMIO(0x9888), 0x1d950400 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_glk(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "d72df5c7-5b4a-4274-a43f-00b0fd51fc68",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "814285f6-354d-41d2-ba49-e24e622714a0",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "07d397a6-b3e6-49f6-9433-a4f293d55978",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "1a356946-5428-450b-a2f0-89f8783a302d",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "5299be9d-7a61-4c99-9f81-f87e6c5aaca9",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "bc9bcff2-459a-4cbc-986d-a84b077153f3",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "88ec931f-5b4a-453a-9db6-a61232b6143d",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "530d176d-2a18-4014-adf8-1500c6c60835",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "fdee5a5a-f23c-43d1-aa73-f6257c71671d",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "6617623e-ca73-4791-b2b7-ddedd0846a0c",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "f3b2ea63-e82e-4234-b418-44dd20dd34d0",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "14411d35-cbf6-4f5e-b68b-190faf9a1a83",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "ffa3f263-0478-4724-8c9f-c911c5ec0f1d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "15274c82-27d2-4819-876a-7cb1a2c59ba4",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "dd3fd789-e783-4204-8cd0-b671bbccb0cf",
+	.attrs =  attrs_test_oa,
+};
+
+int
+i915_perf_register_sysfs_glk(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
+
+	return 0;
+
+error_test_oa:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_glk(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_glk.h b/drivers/gpu/drm/i915/i915_oa_glk.h
new file mode 100644
index 000000000000..5511bb1cecf7
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_glk.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_GLK_H__
+#define __I915_OA_GLK_H__
+
+extern int i915_oa_n_builtin_metric_sets_glk;
+
+extern int i915_oa_select_metric_set_glk(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_glk(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_glk(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index d5126faf2a35..c281847eb56b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -204,6 +204,7 @@
 #include "i915_oa_bxt.h"
 #include "i915_oa_kblgt2.h"
 #include "i915_oa_kblgt3.h"
+#include "i915_oa_glk.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -1781,7 +1782,7 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
 	 * RPT_ID field.
 	 */
 	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv) ||
-	    IS_KABYLAKE(dev_priv)) {
+	    IS_KABYLAKE(dev_priv) || IS_GEMINILAKE(dev_priv)) {
 		I915_WRITE(GEN8_OA_DEBUG,
 			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
 					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
@@ -2869,6 +2870,9 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 				goto sysfs_error;
 		} else
 			goto sysfs_error;
+	} else if (IS_GEMINILAKE(dev_priv)) {
+		if (i915_perf_register_sysfs_glk(dev_priv))
+			goto sysfs_error;
 	}
 
 	goto exit;
@@ -2915,7 +2919,9 @@ void i915_perf_unregister(struct drm_i915_private *dev_priv)
 			i915_perf_unregister_sysfs_kblgt2(dev_priv);
 		else if (IS_KBL_GT3(dev_priv))
 			i915_perf_unregister_sysfs_kblgt3(dev_priv);
-	}
+	} else if (IS_GEMINILAKE(dev_priv))
+		i915_perf_unregister_sysfs_glk(dev_priv);
+
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -3059,6 +3065,13 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 					i915_oa_n_builtin_metric_sets_kblgt3;
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_kblgt3;
+			} else if (IS_GEMINILAKE(dev_priv)) {
+				dev_priv->perf.oa.timestamp_frequency = 19200000;
+
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_glk;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_glk;
 			}
 		}
 
-- 
2.11.0
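
The metric sets registered by i915_perf_register_sysfs_glk() above are advertised to userspace as sysfs groups named by UUID, each exposing the set's numeric id in an "id" file; userspace feeds that id back through the i915 perf open ioctl. A minimal, hypothetical sketch of that flow for the GLK TestOa set follows. The property ids, flags and ioctl are the ones already in include/uapi/drm/i915_drm.h; the sysfs card path, OA format choice and timer exponent are illustrative assumptions, not part of the patch:

/*
 * Hypothetical usage sketch (not part of the patch): read the id that
 * i915_perf_register_sysfs_glk() exposes for the GLK TestOa config and
 * open an i915 perf stream with it.
 */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

#include <drm/i915_drm.h>

static uint64_t read_metric_set_id(const char *uuid)
{
	char path[256];
	uint64_t id = 0;
	FILE *f;

	/* Each registered metric set appears as metrics/<uuid>/id */
	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/metrics/%s/id", uuid);
	f = fopen(path, "r");
	if (!f || fscanf(f, "%" SCNu64, &id) != 1)
		exit(1);
	fclose(f);

	return id;
}

int main(void)
{
	/* UUID of the GLK TestOa group registered above */
	uint64_t metrics_set =
		read_metric_set_id("dd3fd789-e783-4204-8cd0-b671bbccb0cf");
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};
	int drm_fd = open("/dev/dri/card0", O_RDWR);
	int stream_fd;

	if (drm_fd < 0)
		return 1;

	/* On success the ioctl returns a new fd for reading OA records. */
	stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);

	return stream_fd < 0 ? 1 : 0;
}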


* [PATCH v14 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (11 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 12/14] drm/i915/perf: add GLK support Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  2017-05-30 18:51       ` Lionel Landwerlin
  2017-05-26 11:56     ` [PATCH v14 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
  13 siblings, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

Dynamic slice/subslice shutdown will effectively lose the NOA
configuration uploaded to the slices/subslices. When i915 perf is in
use, we therefore need to reprogram it at the beginning of each
workload.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
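One practical consequence of re-emitting the NOA config on every workload is the fixed dword overhead prepended to each batch. Below is a small standalone sketch (not part of the patch) that mirrors the intel_ring_begin() sizing used by i915_oa_emit_noa_config_locked() further down; the mux config length used in main() is an arbitrary assumption:

/* Illustrative only: per-batch dword cost of re-emitting the NOA config,
 * following the sizing used by i915_oa_emit_noa_config_locked():
 * 1 LRI header + 2 dwords per mux register + 4 dwords for the two
 * GDT_CHICKEN_BITS writes bracketing the config + 1 MI_NOOP.
 */
#include <stdio.h>

static unsigned int noa_reemit_dwords(const int *mux_lens, int n_mux_configs)
{
	int total_n_mux_regs = 0;
	int i;

	for (i = 0; i < n_mux_configs; i++)	/* count_total_mux_regs() */
		total_n_mux_regs += mux_lens[i];

	return 1 + total_n_mux_regs * 2 + 4 + 1;
}

int main(void)
{
	int lens[] = { 75 };	/* assumed single mux config of 75 registers */
	unsigned int dwords = noa_reemit_dwords(lens, 1);

	printf("%u dwords, %u bytes prepended to each batch\n",
	       dwords, dwords * 4);
	return 0;
}
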
 drivers/gpu/drm/i915/i915_drv.h         |  2 ++
 drivers/gpu/drm/i915/i915_perf.c        | 64 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_ringbuffer.c |  3 ++
 3 files changed, 69 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index cd1dc9ee05cb..efa8a0b302ca 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2399,6 +2399,7 @@ struct drm_i915_private {
 			const struct i915_oa_reg *mux_regs[6];
 			int mux_regs_lens[6];
 			int n_mux_configs;
+			int total_n_mux_regs;
 
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
@@ -3535,6 +3536,7 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 void i915_oa_init_reg_state(struct intel_engine_cs *engine,
 			    struct i915_gem_context *ctx,
 			    uint32_t *reg_state);
+int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index c281847eb56b..15b1f92fe5b5 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1438,11 +1438,24 @@ static void config_oa_regs(struct drm_i915_private *dev_priv,
 	}
 }
 
+static void count_total_mux_regs(struct drm_i915_private *dev_priv)
+{
+	int i;
+
+	dev_priv->perf.oa.total_n_mux_regs = 0;
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		dev_priv->perf.oa.total_n_mux_regs +=
+			dev_priv->perf.oa.mux_regs_lens[i];
+	}
+}
+
 static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
 {
 	int ret = i915_oa_select_metric_set_hsw(dev_priv);
 	int i;
 
+	count_total_mux_regs(dev_priv);
+
 	if (ret)
 		return ret;
 
@@ -1756,6 +1769,8 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
 	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
 	int i;
 
+	count_total_mux_regs(dev_priv);
+
 	if (ret)
 		return ret;
 
@@ -2094,6 +2109,55 @@ void i915_oa_init_reg_state(struct intel_engine_cs *engine,
 	gen8_update_reg_state_unlocked(ctx, reg_state);
 }
 
+int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv = req->i915;
+	u32 *cs;
+	int i, j;
+
+	lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+	if (!IS_GEN(dev_priv, 8, 9))
+		return 0;
+
+	/* Perf not supported or not enabled. */
+	if (!dev_priv->perf.initialized ||
+	    !dev_priv->perf.oa.exclusive_stream)
+		return 0;
+
+	cs = intel_ring_begin(req,
+			      1 /* MI_LOAD_REGISTER_IMM */ +
+			      dev_priv->perf.oa.total_n_mux_regs * 2 +
+			      4 /* GDT_CHICKEN_BITS */ +
+			      1 /* NOOP */);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_LOAD_REGISTER_IMM(dev_priv->perf.oa.total_n_mux_regs);
+
+	*cs++ = i915_mmio_reg_offset(GDT_CHICKEN_BITS);
+	*cs++ = 0xA0;
+
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		const struct i915_oa_reg *mux_regs =
+			dev_priv->perf.oa.mux_regs[i];
+		const int mux_regs_len = dev_priv->perf.oa.mux_regs_lens[i];
+
+		for (j = 0; j < mux_regs_len; j++) {
+			*cs++ = i915_mmio_reg_offset(mux_regs[j].addr);
+			*cs++ = mux_regs[j].value;
+		}
+	}
+
+	*cs++ = i915_mmio_reg_offset(GDT_CHICKEN_BITS);
+	*cs++ = 0x80;
+
+	*cs++ = MI_NOOP;
+	intel_ring_advance(req, cs);
+
+	return 0;
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index acd1da9b62a3..67aaaebb194b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1874,6 +1874,9 @@ gen8_emit_bb_start(struct drm_i915_gem_request *req,
 			!(dispatch_flags & I915_DISPATCH_SECURE);
 	u32 *cs;
 
+	/* Emit NOA config */
+	i915_oa_emit_noa_config_locked(req);
+
 	cs = intel_ring_begin(req, 4);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v14 14/14] drm/i915/perf: notify sseu configuration changes
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (12 preceding siblings ...)
  2017-05-26 11:56     ` [PATCH v14 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
@ 2017-05-26 11:56     ` Lionel Landwerlin
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-26 11:56 UTC (permalink / raw)
  To: intel-gfx

This adds the ability for userspace to request that the kernel track &
record sseu configuration changes. These changes are inserted into the
perf stream so that userspace can interpret the OA reports using the
configuration applied at the time the OA reports were generated.
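
(Not part of the patch; for illustration only. A rough userspace-side
sketch of how these records might be consumed from the perf stream,
using the structures and enum values added by this series. The helper
name handle_perf_records is made up here, and opening the stream with
DRM_I915_PERF_PROP_SSEU_CHANGE plus the read() loop are not shown.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <drm/i915_drm.h>

/* Walk the byte stream returned by read() on the perf fd and print
 * any SSEU change records found in it. */
static void handle_perf_records(const uint8_t *buf, size_t len)
{
        size_t offset = 0;

        while (offset + sizeof(struct drm_i915_perf_record_header) <= len) {
                struct drm_i915_perf_record_header header;

                memcpy(&header, buf + offset, sizeof(header));
                if (header.size == 0 || offset + header.size > len)
                        break; /* malformed or truncated record */

                if (header.type == DRM_I915_PERF_RECORD_SSEU_CHANGE) {
                        struct drm_i915_perf_sseu_change change;

                        memcpy(&change, buf + offset + sizeof(header),
                               sizeof(change));
                        printf("ctx %u: slices=%x subslices=%x eu=[%u,%u]\n",
                               change.hw_id,
                               change.sseu.packed.slice_mask,
                               change.sseu.packed.subslice_mask,
                               change.sseu.packed.min_eu_per_subslice,
                               change.sseu.packed.max_eu_per_subslice);
                }

                offset += header.size;
        }
}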

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  45 ++++
 drivers/gpu/drm/i915/i915_gem_context.c |   3 +
 drivers/gpu/drm/i915/i915_gem_context.h |  21 ++
 drivers/gpu/drm/i915/i915_perf.c        | 452 ++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/intel_lrc.c        |   7 +-
 include/uapi/drm/i915_drm.h             |  30 +++
 6 files changed, 531 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index efa8a0b302ca..02daea8d49fb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1962,6 +1962,12 @@ struct i915_perf_stream {
 	struct i915_gem_context *ctx;
 
 	/**
+	 * @has_sseu: Whether the stream emits SSEU configuration changes
+	 * reports.
+	 */
+	bool has_sseu;
+
+	/**
 	 * @enabled: Whether the stream is currently enabled, considering
 	 * whether the stream was opened in a disabled state and based
 	 * on `I915_PERF_IOCTL_ENABLE` and `I915_PERF_IOCTL_DISABLE` calls.
@@ -2492,6 +2498,44 @@ struct drm_i915_private {
 			const struct i915_oa_format *oa_formats;
 			int n_builtin_sets;
 		} oa;
+
+		struct {
+			/**
+			 * Buffer containing change reports of the SSEU
+			 * configuration.
+			 */
+			struct i915_vma *vma;
+			u8 *vaddr;
+
+			/**
+			 * Scheduler write to the head, and the perf driver
+			 * reads from tail.
+			 */
+			u32 head;
+			u32 tail;
+
+			/**
+			 * Is the sseu buffer enabled.
+			 */
+			bool enabled;
+
+			/**
+			 * Whether the buffer overflown.
+			 */
+			bool overflow;
+
+			/**
+			 * Keeps track of how many times the OA unit has been
+			 * enabled. This number is used to discard stale
+			 * @perf_sseu in @i915_gem_context.
+			 */
+			atomic64_t enable_no;
+
+			/**
+			 * Lock writes & tail pointer updates on this buffer.
+			 */
+			spinlock_t ptr_lock;
+		} sseu_buffer;
 	} perf;
 
 	/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
@@ -3537,6 +3581,7 @@ void i915_oa_init_reg_state(struct intel_engine_cs *engine,
 			    struct i915_gem_context *ctx,
 			    uint32_t *reg_state);
 int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req);
+void i915_perf_emit_sseu_config(struct drm_i915_gem_request *req);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 60d0bbf0ae2b..368bb527a75a 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -510,6 +510,9 @@ mi_set_context(struct drm_i915_gem_request *req, u32 flags)
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
+	/* Notify perf of possible change in SSEU configuration. */
+	i915_perf_emit_sseu_config(req);
+
 	/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
 	if (INTEL_GEN(dev_priv) >= 7) {
 		*cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index e8247e2f6011..9c005be4933a 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -194,6 +194,27 @@ struct i915_gem_context {
 
 	/** remap_slice: Bitmask of cache lines that need remapping */
 	u8 remap_slice;
+
+	/**
+	 * @oa_sseu: last SSEU configuration notified to userspace
+	 *
+	 * SSEU configuration changes are notified to userspace through the
+	 * perf infrastructure. We keep track here of the last notified
+	 * configuration for a given context since configuration can change
+	 * per engine.
+	 */
+	struct sseu_dev_info perf_sseu;
+
+	/**
+	 * @perf_enable_no: last perf enable identifier
+	 *
+	 * Userspace can enable/disable the perf infrastructure whenever it
+	 * wants without reconfiguring the OA unit. This number is updated at
+	 * the same time as @oa_sseu by copying
+	 * dev_priv->perf.oa.oa_sseu.enable_no which changes every time the
+	 * user enables the OA unit.
+	 */
+	u64 perf_enable_no;
 };
 
 static inline bool i915_gem_context_is_closed(const struct i915_gem_context *ctx)
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 15b1f92fe5b5..7d3fd0887d4b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -212,6 +212,14 @@
  */
 #define OA_BUFFER_SIZE		SZ_16M
 
+/* Dimension the SSEU buffer based on the size of the OA buffer to ensure we
+ * don't run out of space before the OA unit notifies us new data is
+ * available.
+ */
+#define SSEU_BUFFER_ITEM_SIZE	sizeof(struct perf_sseu_change)
+#define SSEU_BUFFER_SIZE	(OA_BUFFER_SIZE * SSEU_BUFFER_ITEM_SIZE / 256)
+#define SSEU_BUFFER_NB_ITEM	(SSEU_BUFFER_SIZE / SSEU_BUFFER_ITEM_SIZE)
+
 #define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))
 
 /**
@@ -350,6 +358,8 @@ struct perf_open_properties {
 	u64 single_context:1;
 	u64 ctx_handle;
 
+	bool sseu_enable;
+
 	/* OA sampling state */
 	int metrics_set;
 	int oa_format;
@@ -357,6 +367,11 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };
 
+struct perf_sseu_change {
+	u32 timestamp;
+	struct drm_i915_perf_sseu_change change;
+};
+
 static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
 {
 	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
@@ -569,6 +584,88 @@ static int append_oa_sample(struct i915_perf_stream *stream,
 	return 0;
 }
 
+static int append_sseu_change(struct i915_perf_stream *stream,
+			      char __user *buf,
+			      size_t count,
+			      size_t *offset,
+			      const struct perf_sseu_change *sseu_report)
+{
+	struct drm_i915_perf_record_header header;
+
+	header.type = DRM_I915_PERF_RECORD_SSEU_CHANGE;
+	header.pad = 0;
+	header.size = sizeof(struct drm_i915_perf_record_header) +
+		sizeof(struct drm_i915_perf_sseu_change);
+
+	if ((count - *offset) < header.size)
+		return -ENOSPC;
+
+	buf += *offset;
+	if (copy_to_user(buf, &header, sizeof(header)))
+		return -EFAULT;
+	buf += sizeof(header);
+
+	if (copy_to_user(buf, &sseu_report->change,
+			 sizeof(struct drm_i915_perf_sseu_change)))
+		return -EFAULT;
+
+	(*offset) += header.size;
+
+	return 0;
+}
+
+static struct perf_sseu_change *
+sseu_buffer_peek(struct drm_i915_private *dev_priv, u32 head, u32 tail)
+{
+	if (head == tail)
+		return NULL;
+
+	return (struct perf_sseu_change *) (dev_priv->perf.sseu_buffer.vaddr +
+					    tail * SSEU_BUFFER_ITEM_SIZE);
+}
+
+static struct perf_sseu_change *
+sseu_buffer_advance(struct drm_i915_private *dev_priv, u32 head, u32 *tail)
+{
+	*tail = (*tail + 1) % SSEU_BUFFER_NB_ITEM;
+
+	return sseu_buffer_peek(dev_priv, head, *tail);
+}
+
+static void
+sseu_buffer_read_pointers(struct i915_perf_stream *stream, u32 *head, u32 *tail)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+
+	if (!stream->has_sseu) {
+		*head = 0;
+		*tail = 0;
+		return;
+	}
+
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	*head = dev_priv->perf.sseu_buffer.head;
+	*tail = dev_priv->perf.sseu_buffer.tail;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+static void
+sseu_buffer_write_tail_pointer(struct i915_perf_stream *stream, u32 tail)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+
+	if (!stream->has_sseu)
+		return;
+
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	dev_priv->perf.sseu_buffer.tail = tail;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
 /**
  * Copies all buffered OA reports into userspace read() buffer.
  * @stream: An i915-perf stream opened for OA metrics
@@ -604,6 +701,8 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 	unsigned int aged_tail_idx;
 	u32 head, tail;
 	u32 taken;
+	struct perf_sseu_change *sseu_report;
+	u32 sseu_head, sseu_tail;
 	int ret = 0;
 
 	if (WARN_ON(!stream->enabled))
@@ -641,6 +740,9 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 		      head, tail))
 		return -EIO;
 
+	/* Extract first SSEU report. */
+	sseu_buffer_read_pointers(stream, &sseu_head, &sseu_tail);
+	sseu_report = sseu_buffer_peek(dev_priv, sseu_head, sseu_tail);
 
 	for (/* none */;
 	     (taken = OA_TAKEN(tail, head));
@@ -679,6 +781,24 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 			continue;
 		}
 
+		while (sseu_report && sseu_report->timestamp < report32[1] /* LOOP */) {
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (stream->ctx &&
+			    stream->ctx->hw_id != sseu_report->change.hw_id) {
+				sseu_report->change.hw_id = INVALID_CTX_ID;
+			}
+
+			ret = append_sseu_change(stream, buf, count,
+						 offset, sseu_report);
+			if (ret)
+				break;
+
+			sseu_report = sseu_buffer_advance(dev_priv, sseu_head,
+							  &sseu_tail);
+		}
+
 		/* XXX: Just keep the lower 21 bits for now since I'm not
 		 * entirely sure if the HW touches any of the higher bits in
 		 * this field
@@ -770,6 +890,9 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 	}
 
+	/* Update tail pointer of the sseu buffer. */
+	sseu_buffer_write_tail_pointer(stream, sseu_tail);
+
 	return ret;
 }
 
@@ -799,12 +922,32 @@ static int gen8_oa_read(struct i915_perf_stream *stream,
 			size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
+	bool overflow = false;
 	u32 oastatus;
 	int ret;
 
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
 		return -EIO;
 
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	overflow = dev_priv->perf.sseu_buffer.overflow;
+	if (overflow) {
+		DRM_DEBUG("SSEU buffer overflow\n");
+
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret) {
+			spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+			return ret;
+		}
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+	}
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
 	oastatus = I915_READ(GEN8_OASTATUS);
 
 	/* We treat OABUFFER_OVERFLOW as a significant error:
@@ -821,16 +964,18 @@ static int gen8_oa_read(struct i915_perf_stream *stream,
 	 * that something has gone quite badly wrong.
 	 */
 	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
-		if (ret)
-			return ret;
-
 		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
 			  dev_priv->perf.oa.period_exponent);
 
-		dev_priv->perf.oa.ops.oa_disable(dev_priv);
-		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+			if (ret)
+				return ret;
+
+			dev_priv->perf.oa.ops.oa_disable(dev_priv);
+			dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		}
 
 		/* Note: .oa_enable() is expected to re-init the oabuffer
 		 * and reset GEN8_OASTATUS for us
@@ -839,10 +984,12 @@ static int gen8_oa_read(struct i915_perf_stream *stream,
 	}
 
 	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
-		if (ret)
-			return ret;
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+			if (ret)
+				return ret;
+		}
 		I915_WRITE(GEN8_OASTATUS,
 			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
 	}
@@ -885,6 +1032,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 	unsigned int aged_tail_idx;
 	u32 head, tail;
 	u32 taken;
+	struct perf_sseu_change *sseu_report;
+	u32 sseu_head, sseu_tail;
 	int ret = 0;
 
 	if (WARN_ON(!stream->enabled))
@@ -922,6 +1071,9 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		      head, tail))
 		return -EIO;
 
+	/* Extract first SSEU report. */
+	sseu_buffer_read_pointers(stream, &sseu_head, &sseu_tail);
+	sseu_report = sseu_buffer_peek(dev_priv, sseu_head, sseu_tail);
 
 	for (/* none */;
 	     (taken = OA_TAKEN(tail, head));
@@ -954,6 +1106,25 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 			continue;
 		}
 
+		/* Emit any potential sseu report. */
+		while (sseu_report && sseu_report->timestamp < report32[1] /* LOOP */) {
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (stream->ctx &&
+			    stream->ctx->hw_id != sseu_report->change.hw_id) {
+				sseu_report->change.hw_id = INVALID_CTX_ID;
+			}
+
+			ret = append_sseu_change(stream, buf, count,
+						 offset, sseu_report);
+			if (ret)
+				break;
+
+			sseu_report = sseu_buffer_advance(dev_priv, sseu_head,
+							  &sseu_tail);
+		}
+
 		ret = append_oa_sample(stream, buf, count, offset, report);
 		if (ret)
 			break;
@@ -983,6 +1154,9 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 	}
 
+	/* Update tail pointer of the sseu buffer. */
+	sseu_buffer_write_tail_pointer(stream, sseu_tail);
+
 	return ret;
 }
 
@@ -1008,12 +1182,32 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 			size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
+	bool overflow = false;
 	u32 oastatus1;
 	int ret;
 
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
 		return -EIO;
 
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	overflow = dev_priv->perf.sseu_buffer.overflow;
+	if (overflow) {
+		DRM_DEBUG("SSEU buffer overflow\n");
+
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret) {
+			spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+			return ret;
+		}
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+	}
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
 	oastatus1 = I915_READ(GEN7_OASTATUS1);
 
 	/* XXX: On Haswell we don't have a safe way to clear oastatus1
@@ -1044,25 +1238,30 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	 *   now.
 	 */
 	if (unlikely(oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW)) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
-		if (ret)
-			return ret;
-
 		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
 			  dev_priv->perf.oa.period_exponent);
 
-		dev_priv->perf.oa.ops.oa_disable(dev_priv);
-		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+			if (ret)
+				return ret;
+
+			dev_priv->perf.oa.ops.oa_disable(dev_priv);
+			dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		}
 
 		oastatus1 = I915_READ(GEN7_OASTATUS1);
 	}
 
 	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
-		if (ret)
-			return ret;
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+			if (ret)
+				return ret;
+		}
+
 		dev_priv->perf.oa.gen7_latched_oastatus1 |=
 			GEN7_OASTATUS1_REPORT_LOST;
 	}
@@ -1217,18 +1416,25 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 }
 
 static void
-free_oa_buffer(struct drm_i915_private *i915)
+free_sseu_buffer(struct drm_i915_private *i915)
 {
-	mutex_lock(&i915->drm.struct_mutex);
+	i915_gem_object_unpin_map(i915->perf.sseu_buffer.vma->obj);
+	i915_vma_unpin(i915->perf.sseu_buffer.vma);
+	i915_gem_object_put(i915->perf.sseu_buffer.vma->obj);
+
+	i915->perf.sseu_buffer.vma = NULL;
+	i915->perf.sseu_buffer.vaddr = NULL;
+}
 
+static void
+free_oa_buffer(struct drm_i915_private *i915)
+{
 	i915_gem_object_unpin_map(i915->perf.oa.oa_buffer.vma->obj);
 	i915_vma_unpin(i915->perf.oa.oa_buffer.vma);
 	i915_gem_object_put(i915->perf.oa.oa_buffer.vma->obj);
 
 	i915->perf.oa.oa_buffer.vma = NULL;
 	i915->perf.oa.oa_buffer.vaddr = NULL;
-
-	mutex_unlock(&i915->drm.struct_mutex);
 }
 
 static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
@@ -1244,8 +1450,15 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 
 	dev_priv->perf.oa.ops.disable_metric_set(dev_priv);
 
+	mutex_lock(&dev_priv->drm.struct_mutex);
+
+	if (stream->has_sseu)
+		free_sseu_buffer(dev_priv);
+
 	free_oa_buffer(dev_priv);
 
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
 	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 	intel_runtime_pm_put(dev_priv);
 
@@ -1258,6 +1471,97 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 	}
 }
 
+static void init_sseu_buffer(struct drm_i915_private *dev_priv,
+			     bool need_lock)
+{
+	/* Cleanup auxilliary buffer. */
+	memset(dev_priv->perf.sseu_buffer.vaddr, 0, SSEU_BUFFER_SIZE);
+
+	atomic64_inc(&dev_priv->perf.sseu_buffer.enable_no);
+
+	if (need_lock)
+		spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	/* Initialize head & tail pointers. */
+	dev_priv->perf.sseu_buffer.head = 0;
+	dev_priv->perf.sseu_buffer.tail = 0;
+	dev_priv->perf.sseu_buffer.overflow = false;
+
+	if (need_lock)
+		spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+static int alloc_sseu_buffer(struct drm_i915_private *dev_priv)
+{
+	struct drm_i915_gem_object *bo;
+	struct i915_vma *vma;
+	int ret = 0;
+
+	bo = i915_gem_object_create(dev_priv, SSEU_BUFFER_SIZE);
+	if (IS_ERR(bo)) {
+		DRM_ERROR("Failed to allocate SSEU buffer\n");
+		ret = PTR_ERR(bo);
+		goto out;
+	}
+
+	vma = i915_gem_object_ggtt_pin(bo, NULL, 0, SSEU_BUFFER_SIZE, 0);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto err_unref;
+	}
+	dev_priv->perf.sseu_buffer.vma = vma;
+
+	dev_priv->perf.sseu_buffer.vaddr =
+		i915_gem_object_pin_map(bo, I915_MAP_WB);
+	if (IS_ERR(dev_priv->perf.sseu_buffer.vaddr)) {
+		ret = PTR_ERR(dev_priv->perf.sseu_buffer.vaddr);
+		goto err_unpin;
+	}
+
+	atomic64_set(&dev_priv->perf.sseu_buffer.enable_no, 0);
+
+	init_sseu_buffer(dev_priv, true);
+
+	DRM_DEBUG_DRIVER("OA SSEU Buffer initialized, gtt offset = 0x%x, vaddr = %p\n",
+			 i915_ggtt_offset(dev_priv->perf.sseu_buffer.vma),
+			 dev_priv->perf.sseu_buffer.vaddr);
+
+	goto out;
+
+err_unpin:
+	__i915_vma_unpin(vma);
+
+err_unref:
+	i915_gem_object_put(bo);
+
+	dev_priv->perf.sseu_buffer.vaddr = NULL;
+	dev_priv->perf.sseu_buffer.vma = NULL;
+
+out:
+	return ret;
+}
+
+static void sseu_enable(struct drm_i915_private *dev_priv)
+{
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	init_sseu_buffer(dev_priv, false);
+
+	dev_priv->perf.sseu_buffer.enabled = true;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+static void sseu_disable(struct drm_i915_private *dev_priv)
+{
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	dev_priv->perf.sseu_buffer.enabled = false;
+	dev_priv->perf.sseu_buffer.overflow = false;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
 static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
@@ -1740,6 +2044,9 @@ static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
 		struct intel_context *ce = &ctx->engine[RCS];
 		u32 *regs;
 
+		memset(&ctx->perf_sseu, 0, sizeof(ctx->perf_sseu));
+		ctx->perf_enable_no = 0;
+
 		/* OA settings will be set upon first use */
 		if (!ce->state)
 			continue;
@@ -1898,6 +2205,9 @@ static void i915_oa_stream_enable(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 
+	if (stream->has_sseu)
+		sseu_enable(dev_priv);
+
 	dev_priv->perf.oa.ops.oa_enable(dev_priv);
 
 	if (dev_priv->perf.oa.periodic)
@@ -1928,6 +2238,9 @@ static void i915_oa_stream_disable(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 
+	if (stream->has_sseu)
+		sseu_disable(dev_priv);
+
 	dev_priv->perf.oa.ops.oa_disable(dev_priv);
 
 	if (dev_priv->perf.oa.periodic)
@@ -2053,6 +2366,14 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 			return ret;
 	}
 
+	if (props->sseu_enable) {
+		ret = alloc_sseu_buffer(dev_priv);
+		if (ret)
+			goto err_sseu_buf_alloc;
+
+		stream->has_sseu = true;
+	}
+
 	ret = alloc_oa_buffer(dev_priv);
 	if (ret)
 		goto err_oa_buf_alloc;
@@ -2085,6 +2406,9 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 err_enable:
 	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 	intel_runtime_pm_put(dev_priv);
+	free_sseu_buffer(dev_priv);
+
+err_sseu_buf_alloc:
 	free_oa_buffer(dev_priv);
 
 err_oa_buf_alloc:
@@ -2159,6 +2483,75 @@ int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req)
 }
 
 /**
+ * i915_perf_emit_sseu_config - emit the SSEU configuration of a gem request
+ * @req: the request
+ *
+ * If the OA unit is enabled, write the SSEU configuration of a
+ * given request into a circular buffer. The buffer is read at the
+ * same time as the OA buffer and its elements are inserted into the
+ * perf stream to notify userspace of SSEU configuration changes.
+ * This is needed to interpret values from the OA reports.
+ */
+void i915_perf_emit_sseu_config(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv = req->i915;
+	struct perf_sseu_change *report;
+	const struct sseu_dev_info *sseu;
+	u32 head, timestamp;
+	u64 perf_enable_no;
+
+	/* Perf not supported. */
+	if (!dev_priv->perf.initialized)
+		return;
+
+	/* Nothing's changed, no need to signal userspace. */
+	sseu = &req->ctx->engine[req->engine->id].sseu;
+	perf_enable_no = atomic64_read(&dev_priv->perf.sseu_buffer.enable_no);
+	if (perf_enable_no == req->ctx->perf_enable_no &&
+	    memcmp(&req->ctx->perf_sseu, sseu, sizeof(*sseu)) == 0)
+		return;
+
+	memcpy(&req->ctx->perf_sseu, sseu, sizeof(*sseu));
+	req->ctx->perf_enable_no = perf_enable_no;
+
+	/* Read the timestamp outside the spin lock. */
+	timestamp = I915_READ(RING_TIMESTAMP(RENDER_RING_BASE));
+
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	if (!dev_priv->perf.sseu_buffer.enabled)
+		goto out;
+
+	head = dev_priv->perf.sseu_buffer.head;
+	dev_priv->perf.sseu_buffer.head =
+		(dev_priv->perf.sseu_buffer.head + 1) % SSEU_BUFFER_NB_ITEM;
+
+	/* This should never happen, unless we've wrongly dimensioned the SSEU
+	 * buffer. */
+	if (dev_priv->perf.sseu_buffer.head == dev_priv->perf.sseu_buffer.tail) {
+		dev_priv->perf.sseu_buffer.overflow = true;
+		goto out;
+	}
+
+	report = (struct perf_sseu_change *) (dev_priv->perf.sseu_buffer.vaddr +
+					      head * SSEU_BUFFER_ITEM_SIZE);
+	memset(report, 0, sizeof(*report));
+
+	report->timestamp = I915_READ(RING_TIMESTAMP(RENDER_RING_BASE));
+	report->change.hw_id = req->ctx->hw_id;
+
+	report->change.sseu.packed.slice_mask = sseu->slice_mask;
+	report->change.sseu.packed.subslice_mask = sseu->subslice_mask;
+	report->change.sseu.packed.min_eu_per_subslice =
+		sseu->min_eu_per_subslice;
+	report->change.sseu.packed.max_eu_per_subslice =
+		sseu->max_eu_per_subslice;
+
+out:
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+/**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
  * @file: An i915 perf stream file
@@ -2805,6 +3198,9 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			props->oa_periodic = true;
 			props->oa_period_exponent = value;
 			break;
+		case DRM_I915_PERF_PROP_SSEU_CHANGE:
+			props->sseu_enable = true;
+			break;
 		case DRM_I915_PERF_PROP_MAX:
 			MISSING_CASE(id);
 			return -EINVAL;
@@ -3165,6 +3561,10 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		mutex_init(&dev_priv->perf.lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
+		spin_lock_init(&dev_priv->perf.sseu_buffer.ptr_lock);
+		dev_priv->perf.sseu_buffer.enabled = false;
+		atomic64_set(&dev_priv->perf.sseu_buffer.enable_no, 0);
+
 		oa_sample_rate_hard_limit =
 			dev_priv->perf.oa.timestamp_frequency / 2;
 		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ad37e2b236b0..e2f83a12337b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -350,8 +350,10 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 		rq = port_unpack(&port[n], &count);
 		if (rq) {
 			GEM_BUG_ON(count > !n);
-			if (!count++)
+			if (!count++) {
+				i915_perf_emit_sseu_config(rq);
 				execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
+			}
 			port_set(&port[n], port_pack(rq, count));
 			desc = execlists_update_context(rq);
 			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
@@ -1406,6 +1408,9 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 		req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
 	}
 
+	/* Emit NOA config */
+	i915_oa_emit_noa_config_locked(req);
+
 	cs = intel_ring_begin(req, 4);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index d561b1f98b20..ca06d46a01af 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -439,6 +439,16 @@ typedef struct drm_i915_setparam {
 	int value;
 } drm_i915_setparam_t;
 
+union drm_i915_gem_param_sseu {
+	struct {
+		__u8 slice_mask;
+		__u8 subslice_mask;
+		__u8 min_eu_per_subslice;
+		__u8 max_eu_per_subslice;
+	} packed;
+	__u64 value;
+};
+
 /* A memory manager for regions of shared memory:
  */
 #define I915_MEM_REGION_AGP 1
@@ -1358,6 +1368,11 @@ enum drm_i915_perf_property_id {
 	 */
 	DRM_I915_PERF_PROP_OA_EXPONENT,
 
+	/**
+	 * Configure the stream to monitor sseu configuration changes.
+	 */
+	DRM_I915_PERF_PROP_SSEU_CHANGE,
+
 	DRM_I915_PERF_PROP_MAX /* non-ABI */
 };
 
@@ -1441,9 +1456,24 @@ enum drm_i915_perf_record_type {
 	 */
 	DRM_I915_PERF_RECORD_OA_BUFFER_LOST = 3,
 
+	/**
+	 * A change in the sseu configuration happened.
+	 *
+	 * struct {
+	 *     struct drm_i915_perf_record_header header;
+	 *     { struct drm_i915_perf_sseu_change change; } && DRM_I915_PERF_PROP_SSEU_CHANGE
+	 * };
+	 */
+	DRM_I915_PERF_RECORD_SSEU_CHANGE = 4,
+
 	DRM_I915_PERF_RECORD_MAX /* non-ABI */
 };
 
+struct drm_i915_perf_sseu_change {
+	__u32 hw_id;
+	union drm_i915_gem_param_sseu sseu;
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* ✓ Fi.CI.BAT: success for series starting with [v2,9/15] drm/i915: expose _SLICE_MASK GETPARM (rev2)
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (16 preceding siblings ...)
  2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
@ 2017-05-26 12:28   ` Patchwork
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  18 siblings, 0 replies; 87+ messages in thread
From: Patchwork @ 2017-05-26 12:28 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v2,9/15] drm/i915: expose _SLICE_MASK GETPARM (rev2)
URL   : https://patchwork.freedesktop.org/series/23896/
State : success

== Summary ==

Series 23896v2 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/23896/revisions/2/mbox/

Test kms_busy:
        Subgroup basic-flip-default-a:
                pass       -> DMESG-WARN (fi-skl-6700hq) fdo#101144
Test kms_cursor_legacy:
        Subgroup basic-busy-flip-before-cursor-atomic:
                fail       -> PASS       (fi-snb-2600) fdo#100215

fdo#101144 https://bugs.freedesktop.org/show_bug.cgi?id=101144
fdo#100215 https://bugs.freedesktop.org/show_bug.cgi?id=100215

fi-bdw-5557u     total:278  pass:267  dwarn:0   dfail:0   fail:0   skip:11  time:444s
fi-bdw-gvtdvm    total:278  pass:256  dwarn:8   dfail:0   fail:0   skip:14  time:430s
fi-bsw-n3050     total:278  pass:242  dwarn:0   dfail:0   fail:0   skip:36  time:574s
fi-bxt-j4205     total:278  pass:259  dwarn:0   dfail:0   fail:0   skip:19  time:509s
fi-byt-j1900     total:278  pass:254  dwarn:0   dfail:0   fail:0   skip:24  time:489s
fi-byt-n2820     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:481s
fi-hsw-4770      total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:413s
fi-hsw-4770r     total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:412s
fi-ilk-650       total:278  pass:228  dwarn:0   dfail:0   fail:0   skip:50  time:421s
fi-ivb-3520m     total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:495s
fi-ivb-3770      total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:464s
fi-kbl-7500u     total:278  pass:255  dwarn:5   dfail:0   fail:0   skip:18  time:469s
fi-kbl-7560u     total:278  pass:263  dwarn:5   dfail:0   fail:0   skip:10  time:565s
fi-skl-6260u     total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:460s
fi-skl-6700hq    total:278  pass:260  dwarn:1   dfail:0   fail:0   skip:17  time:568s
fi-skl-6700k     total:278  pass:256  dwarn:4   dfail:0   fail:0   skip:18  time:465s
fi-skl-6770hq    total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:503s
fi-skl-gvtdvm    total:278  pass:265  dwarn:0   dfail:0   fail:0   skip:13  time:437s
fi-snb-2520m     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:532s
fi-snb-2600      total:278  pass:249  dwarn:0   dfail:0   fail:0   skip:29  time:404s

736e09119776656409971acbf0da829de9eb07e6 drm-tip: 2017y-05m-26d-11h-44m-35s UTC integration manifest
3a5bb9b drm/i915: Program RPCS for Broadwell
268c663 drm/i915: expose _SLICE_MASK GETPARM

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_4819/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH v14 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload
  2017-05-26 11:56     ` [PATCH v14 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
@ 2017-05-30 18:51       ` Lionel Landwerlin
  0 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-30 18:51 UTC (permalink / raw)
  To: intel-gfx

There is a pretty obvious bug with this patch: some OA configs span
more register writes than a single MI_LOAD_REGISTER_IMM can carry (> 128).
I'll send an updated patch; a rough sketch of one way to split the
writes is below.

That doesn't affect patch 14 though.
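
For illustration only, not the updated patch: a rough, untested sketch
of chunking the writes so that each MI_LOAD_REGISTER_IMM carries at
most 128 register/value pairs (the limit quoted above). The helper
name emit_mux_regs_chunked and MAX_LRI_REGS are made up here;
GDT_CHICKEN_BITS bracketing and intel_ring_begin() sizing are left out.

#define MAX_LRI_REGS 128 /* assumed per-packet limit, per the note above */

static u32 *emit_mux_regs_chunked(u32 *cs, const struct i915_oa_reg *regs,
                                  int n_regs)
{
        while (n_regs > 0) {
                int chunk = min(n_regs, MAX_LRI_REGS);
                int i;

                /* One MI_LOAD_REGISTER_IMM header per chunk of writes. */
                *cs++ = MI_LOAD_REGISTER_IMM(chunk);
                for (i = 0; i < chunk; i++) {
                        *cs++ = i915_mmio_reg_offset(regs[i].addr);
                        *cs++ = regs[i].value;
                }

                regs += chunk;
                n_regs -= chunk;
        }

        return cs;
}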

-
Lionel

On 26/05/17 12:56, Lionel Landwerlin wrote:
> Dynamic slices/subslices shutdown will effectively lose the NOA
> configuration uploaded in the slices/subslices. When i915 perf is in
> use, we therefore need to reprogram it.
>
> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h         |  2 ++
>   drivers/gpu/drm/i915/i915_perf.c        | 64 +++++++++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/intel_ringbuffer.c |  3 ++
>   3 files changed, 69 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index cd1dc9ee05cb..efa8a0b302ca 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2399,6 +2399,7 @@ struct drm_i915_private {
>   			const struct i915_oa_reg *mux_regs[6];
>   			int mux_regs_lens[6];
>   			int n_mux_configs;
> +			int total_n_mux_regs;
>   
>   			const struct i915_oa_reg *b_counter_regs;
>   			int b_counter_regs_len;
> @@ -3535,6 +3536,7 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>   void i915_oa_init_reg_state(struct intel_engine_cs *engine,
>   			    struct i915_gem_context *ctx,
>   			    uint32_t *reg_state);
> +int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req);
>   
>   /* i915_gem_evict.c */
>   int __must_check i915_gem_evict_something(struct i915_address_space *vm,
> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
> index c281847eb56b..15b1f92fe5b5 100644
> --- a/drivers/gpu/drm/i915/i915_perf.c
> +++ b/drivers/gpu/drm/i915/i915_perf.c
> @@ -1438,11 +1438,24 @@ static void config_oa_regs(struct drm_i915_private *dev_priv,
>   	}
>   }
>   
> +static void count_total_mux_regs(struct drm_i915_private *dev_priv)
> +{
> +	int i;
> +
> +	dev_priv->perf.oa.total_n_mux_regs = 0;
> +	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
> +		dev_priv->perf.oa.total_n_mux_regs +=
> +			dev_priv->perf.oa.mux_regs_lens[i];
> +	}
> +}
> +
>   static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
>   {
>   	int ret = i915_oa_select_metric_set_hsw(dev_priv);
>   	int i;
>   
> +	count_total_mux_regs(dev_priv);
> +
>   	if (ret)
>   		return ret;
>   
> @@ -1756,6 +1769,8 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
>   	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
>   	int i;
>   
> +	count_total_mux_regs(dev_priv);
> +
>   	if (ret)
>   		return ret;
>   
> @@ -2094,6 +2109,55 @@ void i915_oa_init_reg_state(struct intel_engine_cs *engine,
>   	gen8_update_reg_state_unlocked(ctx, reg_state);
>   }
>   
> +int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req)
> +{
> +	struct drm_i915_private *dev_priv = req->i915;
> +	u32 *cs;
> +	int i, j;
> +
> +	lockdep_assert_held(&dev_priv->drm.struct_mutex);
> +
> +	if (!IS_GEN(dev_priv, 8, 9))
> +		return 0;
> +
> +	/* Perf not supported or not enabled. */
> +	if (!dev_priv->perf.initialized ||
> +	    !dev_priv->perf.oa.exclusive_stream)
> +		return 0;
> +
> +	cs = intel_ring_begin(req,
> +			      1 /* MI_LOAD_REGISTER_IMM */ +
> +			      dev_priv->perf.oa.total_n_mux_regs * 2 +
> +			      4 /* GDT_CHICKEN_BITS */ +
> +			      1 /* NOOP */);
> +	if (IS_ERR(cs))
> +		return PTR_ERR(cs);
> +
> +	*cs++ = MI_LOAD_REGISTER_IMM(dev_priv->perf.oa.total_n_mux_regs);
> +
> +	*cs++ = i915_mmio_reg_offset(GDT_CHICKEN_BITS);
> +	*cs++ = 0xA0;
> +
> +	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
> +		const struct i915_oa_reg *mux_regs =
> +			dev_priv->perf.oa.mux_regs[i];
> +		const int mux_regs_len = dev_priv->perf.oa.mux_regs_lens[i];
> +
> +		for (j = 0; j < mux_regs_len; j++) {
> +			*cs++ = i915_mmio_reg_offset(mux_regs[j].addr);
> +			*cs++ = mux_regs[j].value;
> +		}
> +	}
> +
> +	*cs++ = i915_mmio_reg_offset(GDT_CHICKEN_BITS);
> +	*cs++ = 0x80;
> +
> +	*cs++ = MI_NOOP;
> +	intel_ring_advance(req, cs);
> +
> +	return 0;
> +}
> +
>   /**
>    * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
>    * @stream: An i915 perf stream
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index acd1da9b62a3..67aaaebb194b 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -1874,6 +1874,9 @@ gen8_emit_bb_start(struct drm_i915_gem_request *req,
>   			!(dispatch_flags & I915_DISPATCH_SECURE);
>   	u32 *cs;
>   
> +	/* Emit NOA config */
> +	i915_oa_emit_noa_config_locked(req);
> +
>   	cs = intel_ring_begin(req, 4);
>   	if (IS_ERR(cs))
>   		return PTR_ERR(cs);


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf
  2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                     ` (17 preceding siblings ...)
  2017-05-26 12:28   ` ✓ Fi.CI.BAT: success for series starting with [v2,9/15] drm/i915: expose _SLICE_MASK GETPARM (rev2) Patchwork
@ 2017-05-31 12:33   ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
                       ` (13 more replies)
  18 siblings, 14 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

Hi,

A quick update with a couple of fixes in patches 13 & 14.
No changes in the other patches.

Cheers,

Chris Wilson (3):
  drm/i915: Record both min/max eu_per_subslice in sseu_dev_info
  drm/i915: Program RPCS for Broadwell
  drm/i915: Record the sseu configuration per-context & engine

Lionel Landwerlin (6):
  drm/i915/perf: rework mux configurations queries
  drm/i915: add KBL GT2/GT3 check macros
  drm/i915/perf: add KBL support
  drm/i915/perf: add GLK support
  drm/i915/perf: reprogram NOA muxes at the beginning of each workload
  drm/i915/perf: notify sseu configuration changes

Robert Bragg (5):
  drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  drm/i915/perf: Add OA unit support for Gen 8+
  drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  drm/i915/perf: per-gen timebase for checking sample freq
  drm/i915/perf: remove perf.hook_lock

 drivers/gpu/drm/i915/Makefile            |   11 +-
 drivers/gpu/drm/i915/i915_debugfs.c      |   36 +-
 drivers/gpu/drm/i915/i915_drv.h          |  125 +-
 drivers/gpu/drm/i915/i915_gem_context.c  |    9 +
 drivers/gpu/drm/i915/i915_gem_context.h  |   42 +
 drivers/gpu/drm/i915/i915_oa_bdw.c       | 5376 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_bxt.c       | 2690 +++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_chv.c       | 2873 ++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_glk.c       | 2602 +++++++++++++++
 drivers/gpu/drm/i915/i915_oa_glk.h       |   40 +
 drivers/gpu/drm/i915/i915_oa_hsw.c       |  263 +-
 drivers/gpu/drm/i915/i915_oa_hsw.h       |    4 +-
 drivers/gpu/drm/i915/i915_oa_kblgt2.c    | 2991 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt2.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_kblgt3.c    | 3040 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt3.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_sklgt2.c    | 3479 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_sklgt3.c    | 3039 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h    |   40 +
 drivers/gpu/drm/i915/i915_oa_sklgt4.c    | 3093 +++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h    |   40 +
 drivers/gpu/drm/i915/i915_perf.c         | 1642 ++++++++-
 drivers/gpu/drm/i915/i915_reg.h          |   22 +
 drivers/gpu/drm/i915/intel_device_info.c |   32 +-
 drivers/gpu/drm/i915/intel_lrc.c         |   39 +-
 drivers/gpu/drm/i915/intel_lrc.h         |    2 +
 drivers/gpu/drm/i915/intel_ringbuffer.c  |    3 +
 include/uapi/drm/i915_drm.h              |   49 +-
 32 files changed, 31531 insertions(+), 291 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

--
2.11.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* [PATCH v15 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 02/14] drm/i915: Program RPCS for Broadwell Lionel Landwerlin
                       ` (12 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Chris Wilson <chris@chris-wilson.co.uk>

When we query the available eu on each subslice, we currently only
report the max. It would also be useful to report the minimum found as
well.

When we set RPCS (power gating over the EU), we can also specify both
the min and max number of eu to configure on each slice; currently we
just set it to a single value, but the flexibility may be beneficial in
future.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c      | 36 +++++++++++++++++++++++---------
 drivers/gpu/drm/i915/i915_drv.h          |  3 ++-
 drivers/gpu/drm/i915/intel_device_info.c | 32 +++++++++++++++++-----------
 drivers/gpu/drm/i915/intel_lrc.c         |  4 ++--
 4 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 3b088685a553..9bdf2bd59fbb 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -4497,6 +4497,7 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_cache_sharing_fops,
 static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 					  struct sseu_dev_info *sseu)
 {
+	unsigned int min_eu_per_subslice, max_eu_per_subslice;
 	int ss_max = 2;
 	int ss;
 	u32 sig1[ss_max], sig2[ss_max];
@@ -4506,6 +4507,9 @@ static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 	sig2[0] = I915_READ(CHV_POWER_SS0_SIG2);
 	sig2[1] = I915_READ(CHV_POWER_SS1_SIG2);
 
+	min_eu_per_subslice = ~0u;
+	max_eu_per_subslice = 0;
+
 	for (ss = 0; ss < ss_max; ss++) {
 		unsigned int eu_cnt;
 
@@ -4520,14 +4524,18 @@ static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 			 ((sig1[ss] & CHV_EU210_PG_ENABLE) ? 0 : 2) +
 			 ((sig2[ss] & CHV_EU311_PG_ENABLE) ? 0 : 2);
 		sseu->eu_total += eu_cnt;
-		sseu->eu_per_subslice = max_t(unsigned int,
-					      sseu->eu_per_subslice, eu_cnt);
+		min_eu_per_subslice = min(min_eu_per_subslice, eu_cnt);
+		max_eu_per_subslice = max(max_eu_per_subslice, eu_cnt);
 	}
+
+	sseu->min_eu_per_subslice = min_eu_per_subslice;
+	sseu->max_eu_per_subslice = max_eu_per_subslice;
 }
 
 static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 				    struct sseu_dev_info *sseu)
 {
+	unsigned int min_eu_per_subslice, max_eu_per_subslice;
 	int s_max = 3, ss_max = 4;
 	int s, ss;
 	u32 s_reg[s_max], eu_reg[2*s_max], eu_mask[2];
@@ -4553,6 +4561,9 @@ static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 		     GEN9_PGCTL_SSB_EU210_ACK |
 		     GEN9_PGCTL_SSB_EU311_ACK;
 
+	min_eu_per_subslice = ~0u;
+	max_eu_per_subslice = 0;
+
 	for (s = 0; s < s_max; s++) {
 		if ((s_reg[s] & GEN9_PGCTL_SLICE_ACK) == 0)
 			/* skip disabled slice */
@@ -4578,11 +4589,14 @@ static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 			eu_cnt = 2 * hweight32(eu_reg[2*s + ss/2] &
 					       eu_mask[ss%2]);
 			sseu->eu_total += eu_cnt;
-			sseu->eu_per_subslice = max_t(unsigned int,
-						      sseu->eu_per_subslice,
-						      eu_cnt);
+
+			min_eu_per_subslice = min(min_eu_per_subslice, eu_cnt);
+			max_eu_per_subslice = max(max_eu_per_subslice, eu_cnt);
 		}
 	}
+
+	sseu->min_eu_per_subslice = min_eu_per_subslice;
+	sseu->max_eu_per_subslice = max_eu_per_subslice;
 }
 
 static void broadwell_sseu_device_status(struct drm_i915_private *dev_priv,
@@ -4595,9 +4609,11 @@ static void broadwell_sseu_device_status(struct drm_i915_private *dev_priv,
 
 	if (sseu->slice_mask) {
 		sseu->subslice_mask = INTEL_INFO(dev_priv)->sseu.subslice_mask;
-		sseu->eu_per_subslice =
-				INTEL_INFO(dev_priv)->sseu.eu_per_subslice;
-		sseu->eu_total = sseu->eu_per_subslice *
+		sseu->min_eu_per_subslice =
+			INTEL_INFO(dev_priv)->sseu.min_eu_per_subslice;
+		sseu->max_eu_per_subslice =
+			INTEL_INFO(dev_priv)->sseu.max_eu_per_subslice;
+		sseu->eu_total = sseu->max_eu_per_subslice *
 				 sseu_subslice_total(sseu);
 
 		/* subtract fused off EU(s) from enabled slice(s) */
@@ -4628,8 +4644,8 @@ static void i915_print_sseu_info(struct seq_file *m, bool is_available_info,
 		   hweight8(sseu->subslice_mask));
 	seq_printf(m, "  %s EU Total: %u\n", type,
 		   sseu->eu_total);
-	seq_printf(m, "  %s EU Per Subslice: %u\n", type,
-		   sseu->eu_per_subslice);
+	seq_printf(m, "  %s EU Per Subslice: [%u, %u]\n", type,
+		   sseu->min_eu_per_subslice, sseu->max_eu_per_subslice);
 
 	if (!is_available_info)
 		return;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index bde554eb2257..a9ce0da656dd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -784,7 +784,8 @@ struct sseu_dev_info {
 	u8 slice_mask;
 	u8 subslice_mask;
 	u8 eu_total;
-	u8 eu_per_subslice;
+	u8 min_eu_per_subslice;
+	u8 max_eu_per_subslice;
 	u8 min_eu_in_pool;
 	/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
 	u8 subslice_7eu[3];
diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
index 3718341662c2..f21f597c5cb9 100644
--- a/drivers/gpu/drm/i915/intel_device_info.c
+++ b/drivers/gpu/drm/i915/intel_device_info.c
@@ -83,6 +83,7 @@ void intel_device_info_dump(struct drm_i915_private *dev_priv)
 static void cherryview_sseu_info_init(struct drm_i915_private *dev_priv)
 {
 	struct sseu_dev_info *sseu = &mkwrite_device_info(dev_priv)->sseu;
+	unsigned int eu_per_subslice;
 	u32 fuse, eu_dis;
 
 	fuse = I915_READ(CHV_FUSE_GT);
@@ -107,9 +108,10 @@ static void cherryview_sseu_info_init(struct drm_i915_private *dev_priv)
 	 * CHV expected to always have a uniform distribution of EU
 	 * across subslices.
 	*/
-	sseu->eu_per_subslice = sseu_subslice_total(sseu) ?
-				sseu->eu_total / sseu_subslice_total(sseu) :
-				0;
+	eu_per_subslice = sseu_subslice_total(sseu) ?
+		sseu->eu_total / sseu_subslice_total(sseu) : 0;
+	sseu->min_eu_per_subslice = eu_per_subslice;
+	sseu->max_eu_per_subslice = eu_per_subslice;
 	/*
 	 * CHV supports subslice power gating on devices with more than
 	 * one subslice, and supports EU power gating on devices with
@@ -117,13 +119,14 @@ static void cherryview_sseu_info_init(struct drm_i915_private *dev_priv)
 	*/
 	sseu->has_slice_pg = 0;
 	sseu->has_subslice_pg = sseu_subslice_total(sseu) > 1;
-	sseu->has_eu_pg = (sseu->eu_per_subslice > 2);
+	sseu->has_eu_pg = eu_per_subslice > 2;
 }
 
 static void gen9_sseu_info_init(struct drm_i915_private *dev_priv)
 {
 	struct intel_device_info *info = mkwrite_device_info(dev_priv);
 	struct sseu_dev_info *sseu = &info->sseu;
+	unsigned int eu_per_subslice;
 	int s_max = 3, ss_max = 4, eu_max = 8;
 	int s, ss;
 	u32 fuse2, eu_disable;
@@ -179,9 +182,10 @@ static void gen9_sseu_info_init(struct drm_i915_private *dev_priv)
 	 * recovery. BXT is expected to be perfectly uniform in EU
 	 * distribution.
 	*/
-	sseu->eu_per_subslice = sseu_subslice_total(sseu) ?
-				DIV_ROUND_UP(sseu->eu_total,
-					     sseu_subslice_total(sseu)) : 0;
+	eu_per_subslice = sseu_subslice_total(sseu) ?
+		DIV_ROUND_UP(sseu->eu_total, sseu_subslice_total(sseu)) : 0;
+	sseu->min_eu_per_subslice = eu_per_subslice;
+	sseu->max_eu_per_subslice = eu_per_subslice;
 	/*
 	 * SKL supports slice power gating on devices with more than
 	 * one slice, and supports EU power gating on devices with
@@ -195,7 +199,7 @@ static void gen9_sseu_info_init(struct drm_i915_private *dev_priv)
 		hweight8(sseu->slice_mask) > 1;
 	sseu->has_subslice_pg =
 		IS_GEN9_LP(dev_priv) && sseu_subslice_total(sseu) > 1;
-	sseu->has_eu_pg = sseu->eu_per_subslice > 2;
+	sseu->has_eu_pg = eu_per_subslice > 2;
 
 	if (IS_GEN9_LP(dev_priv)) {
 #define IS_SS_DISABLED(ss)	(!(sseu->subslice_mask & BIT(ss)))
@@ -228,6 +232,7 @@ static void broadwell_sseu_info_init(struct drm_i915_private *dev_priv)
 {
 	struct sseu_dev_info *sseu = &mkwrite_device_info(dev_priv)->sseu;
 	const int s_max = 3, ss_max = 3, eu_max = 8;
+	unsigned int eu_per_subslice;
 	int s, ss;
 	u32 fuse2, eu_disable[3]; /* s_max */
 
@@ -282,9 +287,10 @@ static void broadwell_sseu_info_init(struct drm_i915_private *dev_priv)
 	 * subslices with the exception that any one EU in any one subslice may
 	 * be fused off for die recovery.
 	 */
-	sseu->eu_per_subslice = sseu_subslice_total(sseu) ?
-				DIV_ROUND_UP(sseu->eu_total,
-					     sseu_subslice_total(sseu)) : 0;
+	eu_per_subslice = sseu_subslice_total(sseu) ?
+		DIV_ROUND_UP(sseu->eu_total, sseu_subslice_total(sseu)) : 0;
+	sseu->min_eu_per_subslice = eu_per_subslice;
+	sseu->max_eu_per_subslice = eu_per_subslice;
 
 	/*
 	 * BDW supports slice power gating on devices with more than
@@ -421,7 +427,9 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
 	DRM_DEBUG_DRIVER("subslice per slice: %u\n",
 			 hweight8(info->sseu.subslice_mask));
 	DRM_DEBUG_DRIVER("EU total: %u\n", info->sseu.eu_total);
-	DRM_DEBUG_DRIVER("EU per subslice: %u\n", info->sseu.eu_per_subslice);
+	DRM_DEBUG_DRIVER("EU per subslice: [%u, %u]\n",
+			 info->sseu.min_eu_per_subslice,
+			 info->sseu.max_eu_per_subslice);
 	DRM_DEBUG_DRIVER("has slice power gating: %s\n",
 			 info->sseu.has_slice_pg ? "y" : "n");
 	DRM_DEBUG_DRIVER("has subslice power gating: %s\n",
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 014b30ace8a0..9a279a559a0f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1843,9 +1843,9 @@ make_rpcs(struct drm_i915_private *dev_priv)
 	}
 
 	if (INTEL_INFO(dev_priv)->sseu.has_eu_pg) {
-		rpcs |= INTEL_INFO(dev_priv)->sseu.eu_per_subslice <<
+		rpcs |= INTEL_INFO(dev_priv)->sseu.min_eu_per_subslice <<
 			GEN8_RPCS_EU_MIN_SHIFT;
-		rpcs |= INTEL_INFO(dev_priv)->sseu.eu_per_subslice <<
+		rpcs |= INTEL_INFO(dev_priv)->sseu.max_eu_per_subslice <<
 			GEN8_RPCS_EU_MAX_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v15 02/14] drm/i915: Program RPCS for Broadwell
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 03/14] drm/i915: Record the sseu configuration per-context & engine Lionel Landwerlin
                       ` (11 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Chris Wilson <chris@chris-wilson.co.uk>

Currently we only configure the power gating for Skylake and above, but
the configuration should equally apply to Broadwell and Braswell. Even
though, there is not as much variation as for later generations, we want
to expose control over the configuration to userspace and may want to
opt out of the "always-enabled" setting.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9a279a559a0f..fec66204fe88 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1816,13 +1816,6 @@ make_rpcs(struct drm_i915_private *dev_priv)
 	u32 rpcs = 0;
 
 	/*
-	 * No explicit RPCS request is needed to ensure full
-	 * slice/subslice/EU enablement prior to Gen9.
-	*/
-	if (INTEL_GEN(dev_priv) < 9)
-		return 0;
-
-	/*
 	 * Starting in Gen9, render power gating can leave
 	 * slice/subslice/EU in a partially enabled state. We
 	 * must make an explicit request through RPCS for full
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v15 03/14] drm/i915: Record the sseu configuration per-context & engine
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 02/14] drm/i915: Program RPCS for Broadwell Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 04/14] drm/i915/perf: rework mux configurations queries Lionel Landwerlin
                       ` (10 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Chris Wilson <chris@chris-wilson.co.uk>

We want to expose the ability to reconfigure the slices, subslices and
EUs per context and per engine. To facilitate that, store the current
configuration on the context for each engine, which is initially set
to the device default upon creation.

v2: record sseu configuration per context & engine (Chris)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         | 19 -------------------
 drivers/gpu/drm/i915/i915_gem_context.c |  6 ++++++
 drivers/gpu/drm/i915/i915_gem_context.h | 21 +++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.c        | 23 +++++++++--------------
 4 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a9ce0da656dd..1c1ce2a5f30a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -780,25 +780,6 @@ struct intel_csr {
 	func(overlay_needs_physical); \
 	func(supports_tv);
 
-struct sseu_dev_info {
-	u8 slice_mask;
-	u8 subslice_mask;
-	u8 eu_total;
-	u8 min_eu_per_subslice;
-	u8 max_eu_per_subslice;
-	u8 min_eu_in_pool;
-	/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
-	u8 subslice_7eu[3];
-	u8 has_slice_pg:1;
-	u8 has_subslice_pg:1;
-	u8 has_eu_pg:1;
-};
-
-static inline unsigned int sseu_subslice_total(const struct sseu_dev_info *sseu)
-{
-	return hweight8(sseu->slice_mask) * hweight8(sseu->subslice_mask);
-}
-
 /* Keep in gen based order, and chronological order within a gen */
 enum intel_platform {
 	INTEL_PLATFORM_UNINITIALIZED = 0,
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index c5d1666d7071..60d0bbf0ae2b 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -184,6 +184,8 @@ __create_hw_context(struct drm_i915_private *dev_priv,
 		    struct drm_i915_file_private *file_priv)
 {
 	struct i915_gem_context *ctx;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
 	int ret;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -229,6 +231,10 @@ __create_hw_context(struct drm_i915_private *dev_priv,
 	 * is no remap info, it will be a NOP. */
 	ctx->remap_slice = ALL_L3_SLICES(dev_priv);
 
+	/* On all engines, use the whole device by default */
+	for_each_engine(engine, dev_priv, id)
+		ctx->engine[id].sseu = INTEL_INFO(dev_priv)->sseu;
+
 	i915_gem_context_set_bannable(ctx);
 	ctx->ring_size = 4 * PAGE_SIZE;
 	ctx->desc_template =
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 4af2ab94558b..e8247e2f6011 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -39,6 +39,25 @@ struct i915_hw_ppgtt;
 struct i915_vma;
 struct intel_ring;
 
+struct sseu_dev_info {
+	u8 slice_mask;
+	u8 subslice_mask;
+	u8 eu_total;
+	u8 min_eu_per_subslice;
+	u8 max_eu_per_subslice;
+	u8 min_eu_in_pool;
+	/* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
+	u8 subslice_7eu[3];
+	u8 has_slice_pg:1;
+	u8 has_subslice_pg:1;
+	u8 has_eu_pg:1;
+};
+
+static inline unsigned int sseu_subslice_total(const struct sseu_dev_info *sseu)
+{
+	return hweight8(sseu->slice_mask) * hweight8(sseu->subslice_mask);
+}
+
 #define DEFAULT_CONTEXT_HANDLE 0
 
 /**
@@ -151,6 +170,8 @@ struct i915_gem_context {
 		u64 lrc_desc;
 		int pin_count;
 		bool initialised;
+		/** sseu: Control eu/slice partitioning */
+		struct sseu_dev_info sseu;
 	} engine[I915_NUM_ENGINES];
 
 	/** ring_size: size for allocating the per-engine ring buffer */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fec66204fe88..78fb1f452891 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1810,8 +1810,7 @@ int logical_xcs_ring_init(struct intel_engine_cs *engine)
 	return logical_ring_init(engine);
 }
 
-static u32
-make_rpcs(struct drm_i915_private *dev_priv)
+static u32 make_rpcs(const struct sseu_dev_info *sseu)
 {
 	u32 rpcs = 0;
 
@@ -1821,25 +1820,21 @@ make_rpcs(struct drm_i915_private *dev_priv)
 	 * must make an explicit request through RPCS for full
 	 * enablement.
 	*/
-	if (INTEL_INFO(dev_priv)->sseu.has_slice_pg) {
+	if (sseu->has_slice_pg) {
 		rpcs |= GEN8_RPCS_S_CNT_ENABLE;
-		rpcs |= hweight8(INTEL_INFO(dev_priv)->sseu.slice_mask) <<
-			GEN8_RPCS_S_CNT_SHIFT;
+		rpcs |= hweight8(sseu->slice_mask) << GEN8_RPCS_S_CNT_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
 
-	if (INTEL_INFO(dev_priv)->sseu.has_subslice_pg) {
+	if (sseu->has_subslice_pg) {
 		rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
-		rpcs |= hweight8(INTEL_INFO(dev_priv)->sseu.subslice_mask) <<
-			GEN8_RPCS_SS_CNT_SHIFT;
+		rpcs |= hweight8(sseu->subslice_mask) << GEN8_RPCS_SS_CNT_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
 
-	if (INTEL_INFO(dev_priv)->sseu.has_eu_pg) {
-		rpcs |= INTEL_INFO(dev_priv)->sseu.min_eu_per_subslice <<
-			GEN8_RPCS_EU_MIN_SHIFT;
-		rpcs |= INTEL_INFO(dev_priv)->sseu.max_eu_per_subslice <<
-			GEN8_RPCS_EU_MAX_SHIFT;
+	if (sseu->has_eu_pg) {
+		rpcs |= sseu->min_eu_per_subslice << GEN8_RPCS_EU_MIN_SHIFT;
+		rpcs |= sseu->max_eu_per_subslice << GEN8_RPCS_EU_MAX_SHIFT;
 		rpcs |= GEN8_RPCS_ENABLE;
 	}
 
@@ -1949,7 +1944,7 @@ static void execlists_init_reg_state(u32 *regs,
 	if (rcs) {
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
-			make_rpcs(dev_priv));
+			make_rpcs(&ctx->engine[engine->id].sseu));
 	}
 }
 
-- 
2.11.0


* [PATCH v15 04/14] drm/i915/perf: rework mux configurations queries
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (2 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 03/14] drm/i915: Record the sseu configuration per-context & engine Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
                       ` (9 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

Gen8+ might have mux configurations per slice/subslice. Depending on
whether slices/subslices have been fused off, only part of the
configuration needs to be applied. This change reworks the mux
configuration query mechanism to allow more than one set of registers
to be programmed.
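
To make the new calling convention easier to follow at a glance, here is a
self-contained sketch of the pattern (names and register values are
illustrative, not the driver's): the per-metric-set getter fills
caller-provided arrays and returns how many register lists apply, and the
caller then programs each list in turn.

  #include <stdio.h>

  struct reg { unsigned int addr, value; };

  /* stand-ins for the generated per-slice register tables */
  static const struct reg cfg_slice0[] = { { 0x9888, 0x143f000f } };
  static const struct reg cfg_slice1[] = { { 0x9888, 0x14bf000f } };

  /* fill regs[]/lens[] with every list that applies and return the count */
  static int get_mux_config(unsigned int slice_mask,
                            const struct reg **regs, int *lens)
  {
          int n = 0;

          if (slice_mask & 0x1) {
                  regs[n] = cfg_slice0;
                  lens[n] = 1;
                  n++;
          }
          if (slice_mask & 0x2) {
                  regs[n] = cfg_slice1;
                  lens[n] = 1;
                  n++;
          }
          return n;
  }

  int main(void)
  {
          const struct reg *regs[2];
          int lens[2];
          int n = get_mux_config(0x3, regs, lens);
          int i, j;

          for (i = 0; i < n; i++)        /* the caller loops over each config */
                  for (j = 0; j < lens[i]; j++)
                          printf("write 0x%x = 0x%08x\n",
                                 regs[i][j].addr, regs[i][j].value);
          return 0;
  }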

v2: s/n_mux_regs/n_mux_configs/ (Matthew)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |   6 +-
 drivers/gpu/drm/i915/i915_oa_hsw.c | 215 ++++++++++++++++++++++++-------------
 drivers/gpu/drm/i915/i915_oa_hsw.h |   4 +-
 drivers/gpu/drm/i915/i915_perf.c   |   7 +-
 4 files changed, 151 insertions(+), 81 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1c1ce2a5f30a..1c0e6f9803bc 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2397,8 +2397,10 @@ struct drm_i915_private {
 
 			int metrics_set;
 
-			const struct i915_oa_reg *mux_regs;
-			int mux_regs_len;
+			const struct i915_oa_reg *mux_regs[1];
+			int mux_regs_lens[1];
+			int n_mux_configs;
+
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
 
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c
index 4ddf756add31..8c13e0880e53 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.c
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.c
@@ -1,5 +1,7 @@
 /*
- * Autogenerated file, DO NOT EDIT manually!
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
  *
  * Copyright (c) 2015 Intel Corporation
  *
@@ -109,12 +111,21 @@ static const struct i915_oa_reg mux_config_render_basic[] = {
 	{ _MMIO(0x25428), 0x00042049 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_render_basic_mux_config(struct drm_i915_private *dev_priv,
-			    int *len)
+			    const struct i915_oa_reg **regs,
+			    int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_render_basic);
-	return mux_config_render_basic;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_compute_basic[] = {
@@ -172,12 +183,21 @@ static const struct i915_oa_reg mux_config_compute_basic[] = {
 	{ _MMIO(0x25428), 0x00000c03 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
-			     int *len)
+			     const struct i915_oa_reg **regs,
+			     int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_compute_basic);
-	return mux_config_compute_basic;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_compute_extended[] = {
@@ -221,12 +241,21 @@ static const struct i915_oa_reg mux_config_compute_extended[] = {
 	{ _MMIO(0x25428), 0x00000000 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
-				int *len)
+				const struct i915_oa_reg **regs,
+				int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_compute_extended);
-	return mux_config_compute_extended;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_memory_reads[] = {
@@ -281,12 +310,21 @@ static const struct i915_oa_reg mux_config_memory_reads[] = {
 	{ _MMIO(0x25428), 0x00000000 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
-			    int *len)
+			    const struct i915_oa_reg **regs,
+			    int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_memory_reads);
-	return mux_config_memory_reads;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_memory_writes[] = {
@@ -341,12 +379,21 @@ static const struct i915_oa_reg mux_config_memory_writes[] = {
 	{ _MMIO(0x25428), 0x00000000 },
 };
 
-static const struct i915_oa_reg *
+static int
 get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
-			     int *len)
+			     const struct i915_oa_reg **regs,
+			     int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_memory_writes);
-	return mux_config_memory_writes;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
 }
 
 static const struct i915_oa_reg b_counter_config_sampler_balance[] = {
@@ -401,31 +448,40 @@ static const struct i915_oa_reg mux_config_sampler_balance[] = {
 	{ _MMIO(0x25428), 0x0004a54a },
 };
 
-static const struct i915_oa_reg *
+static int
 get_sampler_balance_mux_config(struct drm_i915_private *dev_priv,
-			       int *len)
+			       const struct i915_oa_reg **regs,
+			       int *lens)
 {
-	*len = ARRAY_SIZE(mux_config_sampler_balance);
-	return mux_config_sampler_balance;
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_balance;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_balance);
+	n++;
+
+	return n;
 }
 
 int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.mux_regs = NULL;
-	dev_priv->perf.oa.mux_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
 	dev_priv->perf.oa.b_counter_regs = NULL;
 	dev_priv->perf.oa.b_counter_regs_len = 0;
 
 	switch (dev_priv->perf.oa.metrics_set) {
 	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_render_basic_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set");
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -438,14 +494,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_COMPUTE_BASIC:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_compute_basic_mux_config(dev_priv,
-						     &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set");
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -458,14 +515,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_COMPUTE_EXTENDED:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_compute_extended_mux_config(dev_priv,
-							&dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set");
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -478,14 +536,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_MEMORY_READS:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_memory_reads_mux_config(dev_priv,
-						    &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set");
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -498,14 +557,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_MEMORY_WRITES:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_memory_writes_mux_config(dev_priv,
-						     &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set");
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -518,14 +578,15 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 
 		return 0;
 	case METRIC_SET_ID_SAMPLER_BALANCE:
-		dev_priv->perf.oa.mux_regs =
+		dev_priv->perf.oa.n_mux_configs =
 			get_sampler_balance_mux_config(dev_priv,
-						       &dev_priv->perf.oa.mux_regs_len);
-		if (!dev_priv->perf.oa.mux_regs) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_BALANCE\" metric set");
+						       dev_priv->perf.oa.mux_regs,
+						       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_BALANCE\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised so userspace and
+			 * and so it wouldn't have been advertised to userspace and
 			 * so shouldn't have been requested
 			 */
 			return -EINVAL;
@@ -677,35 +738,36 @@ static struct attribute_group group_sampler_balance = {
 int
 i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 {
-	int mux_len;
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
 	int ret = 0;
 
-	if (get_render_basic_mux_config(dev_priv, &mux_len)) {
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 		if (ret)
 			goto error_render_basic;
 	}
-	if (get_compute_basic_mux_config(dev_priv, &mux_len)) {
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
 		if (ret)
 			goto error_compute_basic;
 	}
-	if (get_compute_extended_mux_config(dev_priv, &mux_len)) {
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
 		if (ret)
 			goto error_compute_extended;
 	}
-	if (get_memory_reads_mux_config(dev_priv, &mux_len)) {
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
 		if (ret)
 			goto error_memory_reads;
 	}
-	if (get_memory_writes_mux_config(dev_priv, &mux_len)) {
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
 		if (ret)
 			goto error_memory_writes;
 	}
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len)) {
+	if (get_sampler_balance_mux_config(dev_priv, mux_regs, mux_lens)) {
 		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_balance);
 		if (ret)
 			goto error_sampler_balance;
@@ -714,19 +776,19 @@ i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 	return 0;
 
 error_sampler_balance:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
 error_memory_writes:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
 error_memory_reads:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
 error_compute_extended:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
 error_compute_basic:
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
@@ -735,18 +797,19 @@ i915_perf_register_sysfs_hsw(struct drm_i915_private *dev_priv)
 void
 i915_perf_unregister_sysfs_hsw(struct drm_i915_private *dev_priv)
 {
-	int mux_len;
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
 
-	if (get_render_basic_mux_config(dev_priv, &mux_len))
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
-	if (get_compute_basic_mux_config(dev_priv, &mux_len))
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
-	if (get_compute_extended_mux_config(dev_priv, &mux_len))
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
-	if (get_memory_reads_mux_config(dev_priv, &mux_len))
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
-	if (get_memory_writes_mux_config(dev_priv, &mux_len))
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
-	if (get_sampler_balance_mux_config(dev_priv, &mux_len))
+	if (get_sampler_balance_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_balance);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.h b/drivers/gpu/drm/i915/i915_oa_hsw.h
index 429a229b5158..6fe7e0690ef3 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.h
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.h
@@ -1,5 +1,7 @@
 /*
- * Autogenerated file, DO NOT EDIT manually!
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
  *
  * Copyright (c) 2015 Intel Corporation
  *
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 85269bcc8372..7e56b895fd34 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1047,6 +1047,7 @@ static void config_oa_regs(struct drm_i915_private *dev_priv,
 static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
 {
 	int ret = i915_oa_select_metric_set_hsw(dev_priv);
+	int i;
 
 	if (ret)
 		return ret;
@@ -1068,8 +1069,10 @@ static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN6_UCGCTL1, (I915_READ(GEN6_UCGCTL1) |
 				  GEN6_CSUNIT_CLOCK_GATE_DISABLE));
 
-	config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs,
-		       dev_priv->perf.oa.mux_regs_len);
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs[i],
+			       dev_priv->perf.oa.mux_regs_lens[i]);
+	}
 
 	/* It apparently takes a fairly long time for a new MUX
 	 * configuration to be be applied after these register writes.
-- 
2.11.0


* [PATCH v15 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (3 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 04/14] drm/i915/perf: rework mux configurations queries Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
                       ` (8 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Adds static OA unit, MUX, B Counter + Flex EU configurations for basic
render metrics on Broadwell, Cherryview, Skylake and Broxton. These are
auto-generated from an XML description of metric sets, currently
maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml WHITELIST=RenderBasic
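
For context on how these generated sets end up being used: each file also
registers a sysfs group named after the metric set's GUID containing an "id"
attribute, and userspace reads that id to pick a set when opening an i915
perf stream. A hedged userspace sketch (the sysfs path layout and card index
are assumptions for illustration; the GUID is the BDW RenderBasic one from
this patch):

  #include <stdio.h>

  int main(void)
  {
          /* assumed layout: <drm card>/metrics/<metric set GUID>/id */
          const char *path = "/sys/class/drm/card0/metrics/"
                             "b541bd57-0e0f-4154-b4c0-5858010a2bf7/id";
          FILE *f = fopen(path, "r");
          int id;

          if (!f) {
                  perror(path);
                  return 1;
          }
          if (fscanf(f, "%d", &id) != 1) {
                  fclose(f);
                  fprintf(stderr, "could not parse %s\n", path);
                  return 1;
          }
          fclose(f);

          /* this value is what a perf open request would reference */
          printf("RenderBasic metric set id: %d\n", id);
          return 0;
  }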

v2: add newlines to debug messages + fix comment (Matthew Auld)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/Makefile         |   8 +-
 drivers/gpu/drm/i915/i915_drv.h       |   6 +-
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 392 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bdw.h    |  40 ++++
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 248 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_bxt.h    |  40 ++++
 drivers/gpu/drm/i915/i915_oa_chv.c    | 238 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_chv.h    |  40 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 238 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt2.h |  40 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 249 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt3.h |  40 ++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 260 ++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_sklgt4.h |  40 ++++
 14 files changed, 1876 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bdw.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_bxt.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_chv.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt3.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_sklgt4.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 16dccf550412..49a628cdef9e 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -129,7 +129,13 @@ i915-y += i915_vgpu.o
 
 # perf code
 i915-y += i915_perf.o \
-	  i915_oa_hsw.o
+	  i915_oa_hsw.o \
+	  i915_oa_bdw.o \
+	  i915_oa_chv.o \
+	  i915_oa_sklgt2.o \
+	  i915_oa_sklgt3.o \
+	  i915_oa_sklgt4.o \
+	  i915_oa_bxt.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1c0e6f9803bc..a9aaff0f15e0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2397,12 +2397,14 @@ struct drm_i915_private {
 
 			int metrics_set;
 
-			const struct i915_oa_reg *mux_regs[1];
-			int mux_regs_lens[1];
+			const struct i915_oa_reg *mux_regs[2];
+			int mux_regs_lens[2];
 			int n_mux_configs;
 
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
+			const struct i915_oa_reg *flex_regs;
+			int flex_regs_len;
 
 			struct {
 				struct i915_vma *vma;
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
new file mode 100644
index 000000000000..9a11c03b4ecb
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -0,0 +1,392 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bdw.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bdw = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14110014 },
+	{ _MMIO(0x9888), 0x14310014 },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x183d0800 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x08584000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x185b2400 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380001 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00104000 },
+	{ _MMIO(0x9888), 0x08104000 },
+	{ _MMIO(0x9888), 0x00110030 },
+	{ _MMIO(0x9888), 0x08110031 },
+	{ _MMIO(0x9888), 0x10110000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x16130020 },
+	{ _MMIO(0x9888), 0x06308000 },
+	{ _MMIO(0x9888), 0x08308000 },
+	{ _MMIO(0x9888), 0x06311800 },
+	{ _MMIO(0x9888), 0x08311880 },
+	{ _MMIO(0x9888), 0x10310000 },
+	{ _MMIO(0x9888), 0x0e334000 },
+	{ _MMIO(0x9888), 0x16330080 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x0d888000 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a00a0 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2820 },
+	{ _MMIO(0x9888), 0x258b2550 },
+	{ _MMIO(0x9888), 0x198c1000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801110 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801465 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800ca5 },
+	{ _MMIO(0x9888), 0x41800003 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x143f000f },
+	{ _MMIO(0x9888), 0x14bf000f },
+	{ _MMIO(0x9888), 0x14910014 },
+	{ _MMIO(0x9888), 0x14b10014 },
+	{ _MMIO(0x9888), 0x118a0317 },
+	{ _MMIO(0x9888), 0x13837be0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x0a3f0023 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x0a5a4000 },
+	{ _MMIO(0x9888), 0x0a1d4000 },
+	{ _MMIO(0x9888), 0x0e1f8000 },
+	{ _MMIO(0x9888), 0x0a391000 },
+	{ _MMIO(0x9888), 0x00dc4000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x00bd8000 },
+	{ _MMIO(0x9888), 0x18bd0800 },
+	{ _MMIO(0x9888), 0x0abf1180 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x00d84000 },
+	{ _MMIO(0x9888), 0x08d84000 },
+	{ _MMIO(0x9888), 0x0ada8000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db2400 },
+	{ _MMIO(0x9888), 0x0a9d8000 },
+	{ _MMIO(0x9888), 0x0c9f0800 },
+	{ _MMIO(0x9888), 0x0e9f2a00 },
+	{ _MMIO(0x9888), 0x109f0002 },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b80001 },
+	{ _MMIO(0x9888), 0x00b92000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab94000 },
+	{ _MMIO(0x9888), 0x00904000 },
+	{ _MMIO(0x9888), 0x08904000 },
+	{ _MMIO(0x9888), 0x00910030 },
+	{ _MMIO(0x9888), 0x08910031 },
+	{ _MMIO(0x9888), 0x10910000 },
+	{ _MMIO(0x9888), 0x00934000 },
+	{ _MMIO(0x9888), 0x16930020 },
+	{ _MMIO(0x9888), 0x06b08000 },
+	{ _MMIO(0x9888), 0x08b08000 },
+	{ _MMIO(0x9888), 0x06b11800 },
+	{ _MMIO(0x9888), 0x08b11880 },
+	{ _MMIO(0x9888), 0x10b10000 },
+	{ _MMIO(0x9888), 0x0eb34000 },
+	{ _MMIO(0x9888), 0x16b30080 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88b800 },
+	{ _MMIO(0x9888), 0x038a0380 },
+	{ _MMIO(0x9888), 0x058a000e },
+	{ _MMIO(0x9888), 0x1b8a0080 },
+	{ _MMIO(0x9888), 0x078a0000 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x238b2840 },
+	{ _MMIO(0x9888), 0x258b26a0 },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1100 },
+	{ _MMIO(0x9888), 0x018d2000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8d8000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x0d831021 },
+	{ _MMIO(0x9888), 0x0f83572f },
+	{ _MMIO(0x9888), 0x01835680 },
+	{ _MMIO(0x9888), 0x0383002c },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830001 },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0x9888), 0x4d801550 },
+	{ _MMIO(0x9888), 0x4f800331 },
+	{ _MMIO(0x9888), 0x43800802 },
+	{ _MMIO(0x9888), 0x51800400 },
+	{ _MMIO(0x9888), 0x458004a1 },
+	{ _MMIO(0x9888), 0x53805555 },
+	{ _MMIO(0x9888), 0x47800421 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801421 },
+	{ _MMIO(0x9888), 0x41800845 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		regs[n] = mux_config_render_basic_0_slices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_0_slices_0x01);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		regs[n] = mux_config_render_basic_1_slices_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_1_slices_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.h b/drivers/gpu/drm/i915/i915_oa_bdw.h
new file mode 100644
index 000000000000..6363ff9f64c0
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BDW_H__
+#define __I915_OA_BDW_H__
+
+extern int i915_oa_n_builtin_metric_sets_bdw;
+
+extern int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
new file mode 100644
index 000000000000..345ec1d3faa7
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -0,0 +1,248 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_bxt.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_bxt = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x166c00f0 },
+	{ _MMIO(0x9888), 0x12120280 },
+	{ _MMIO(0x9888), 0x12320280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x419000a0 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2e0800 },
+	{ _MMIO(0x9888), 0x0e2e5900 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x1c4f0010 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a0fcc00 },
+	{ _MMIO(0x9888), 0x1c0f0002 },
+	{ _MMIO(0x9888), 0x1c2c0040 },
+	{ _MMIO(0x9888), 0x00101000 },
+	{ _MMIO(0x9888), 0x04101000 },
+	{ _MMIO(0x9888), 0x00114000 },
+	{ _MMIO(0x9888), 0x08114000 },
+	{ _MMIO(0x9888), 0x00120020 },
+	{ _MMIO(0x9888), 0x08120021 },
+	{ _MMIO(0x9888), 0x00141000 },
+	{ _MMIO(0x9888), 0x08141000 },
+	{ _MMIO(0x9888), 0x02308000 },
+	{ _MMIO(0x9888), 0x04302000 },
+	{ _MMIO(0x9888), 0x06318000 },
+	{ _MMIO(0x9888), 0x08318000 },
+	{ _MMIO(0x9888), 0x06320800 },
+	{ _MMIO(0x9888), 0x08320840 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x08344000 },
+	{ _MMIO(0x9888), 0x0d931831 },
+	{ _MMIO(0x9888), 0x0f939f3f },
+	{ _MMIO(0x9888), 0x01939e80 },
+	{ _MMIO(0x9888), 0x039303bc },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1993002a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900187 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901110 },
+	{ _MMIO(0x9888), 0x43900423 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900c02 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900020 },
+	{ _MMIO(0x9888), 0x59901111 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900821 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		regs[n] = mux_config_render_basic_0_sku_gte_0x03;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_0_sku_gte_0x03);
+		n++;
+	}
+
+	return n;
+}
+
+int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "22b9519a-e9ba-4c41-8b54-f4f8ca14fa0a",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.h b/drivers/gpu/drm/i915/i915_oa_bxt.h
new file mode 100644
index 000000000000..6cf7ba746e7e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_BXT_H__
+#define __I915_OA_BXT_H__
+
+extern int i915_oa_n_builtin_metric_sets_bxt;
+
+extern int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
new file mode 100644
index 000000000000..b15f6c980d11
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -0,0 +1,238 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_chv.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_chv = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x285a0006 },
+	{ _MMIO(0x9888), 0x2c110014 },
+	{ _MMIO(0x9888), 0x2e110000 },
+	{ _MMIO(0x9888), 0x2c310014 },
+	{ _MMIO(0x9888), 0x2e310000 },
+	{ _MMIO(0x9888), 0x2b8303df },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x00580888 },
+	{ _MMIO(0x9888), 0x1e5a0015 },
+	{ _MMIO(0x9888), 0x205a0014 },
+	{ _MMIO(0x9888), 0x045a0000 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x02180500 },
+	{ _MMIO(0x9888), 0x00190555 },
+	{ _MMIO(0x9888), 0x021d0500 },
+	{ _MMIO(0x9888), 0x021f0a00 },
+	{ _MMIO(0x9888), 0x00380444 },
+	{ _MMIO(0x9888), 0x02390500 },
+	{ _MMIO(0x9888), 0x003a0666 },
+	{ _MMIO(0x9888), 0x00100111 },
+	{ _MMIO(0x9888), 0x06110030 },
+	{ _MMIO(0x9888), 0x0a110031 },
+	{ _MMIO(0x9888), 0x0e110046 },
+	{ _MMIO(0x9888), 0x04110000 },
+	{ _MMIO(0x9888), 0x00110000 },
+	{ _MMIO(0x9888), 0x00130111 },
+	{ _MMIO(0x9888), 0x00300444 },
+	{ _MMIO(0x9888), 0x08310030 },
+	{ _MMIO(0x9888), 0x0c310031 },
+	{ _MMIO(0x9888), 0x10310046 },
+	{ _MMIO(0x9888), 0x04310000 },
+	{ _MMIO(0x9888), 0x00310000 },
+	{ _MMIO(0x9888), 0x00330444 },
+	{ _MMIO(0x9888), 0x038a0a00 },
+	{ _MMIO(0x9888), 0x018b0fff },
+	{ _MMIO(0x9888), 0x038b0a00 },
+	{ _MMIO(0x9888), 0x01855000 },
+	{ _MMIO(0x9888), 0x03850055 },
+	{ _MMIO(0x9888), 0x13830021 },
+	{ _MMIO(0x9888), 0x15830020 },
+	{ _MMIO(0x9888), 0x1783002f },
+	{ _MMIO(0x9888), 0x1983002e },
+	{ _MMIO(0x9888), 0x1b83002d },
+	{ _MMIO(0x9888), 0x1d83002c },
+	{ _MMIO(0x9888), 0x05830000 },
+	{ _MMIO(0x9888), 0x01840555 },
+	{ _MMIO(0x9888), 0x03840500 },
+	{ _MMIO(0x9888), 0x23800074 },
+	{ _MMIO(0x9888), 0x2580007d },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x01805000 },
+	{ _MMIO(0x9888), 0x03800055 },
+	{ _MMIO(0x9888), 0x01865000 },
+	{ _MMIO(0x9888), 0x03860055 },
+	{ _MMIO(0x9888), 0x01875000 },
+	{ _MMIO(0x9888), 0x03870055 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x4380000a },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x4780000a },
+	{ _MMIO(0x9888), 0x49800000 },
+	{ _MMIO(0x9888), 0x4b800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x55800000 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "9d8a3af5-c02c-4a4a-b947-f1672469e0fb",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.h b/drivers/gpu/drm/i915/i915_oa_chv.h
new file mode 100644
index 000000000000..8b8bdc26d726
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_chv.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_CHV_H__
+#define __I915_OA_CHV_H__
+
+extern int i915_oa_n_builtin_metric_sets_chv;
+
+extern int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
new file mode 100644
index 000000000000..9ab9d21ec335
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -0,0 +1,238 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt2.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic_1_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0080 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0d2000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162c2200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190001f },
+	{ _MMIO(0x9888), 0x51904400 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c21 },
+	{ _MMIO(0x9888), 0x47900061 },
+	{ _MMIO(0x9888), 0x57904440 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900004 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (dev_priv->drm.pdev->revision >= 0x02) {
+		regs[n] = mux_config_render_basic_1_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_basic_1_sku_gte_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.h b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
new file mode 100644
index 000000000000..f4397baf3328
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT2_H__
+#define __I915_OA_SKLGT2_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt2;
+
+extern int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
new file mode 100644
index 000000000000..e32d3b3ad77a
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -0,0 +1,249 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt3.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0380 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162ca200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0200 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x51907710 },
+	{ _MMIO(0x9888), 0x419020a0 },
+	{ _MMIO(0x9888), 0x55901515 },
+	{ _MMIO(0x9888), 0x45900529 },
+	{ _MMIO(0x9888), 0x47901025 },
+	{ _MMIO(0x9888), 0x57907770 },
+	{ _MMIO(0x9888), 0x49902100 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900108 },
+	{ _MMIO(0x9888), 0x59900007 },
+	{ _MMIO(0x9888), 0x43902108 },
+	{ _MMIO(0x9888), 0x53907777 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.h b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
new file mode 100644
index 000000000000..c0accb1f9b74
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT3_H__
+#define __I915_OA_SKLGT3_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt3;
+
+extern int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
new file mode 100644
index 000000000000..ed034f190a6c
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -0,0 +1,260 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_sklgt4.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+};
+
+int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x176c01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e03b0 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4ca400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0230 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0acc2000 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x088d8000 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x0e8f1000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8800 },
+	{ _MMIO(0x9888), 0x1b4e0020 },
+	{ _MMIO(0x9888), 0x096c5300 },
+	{ _MMIO(0x9888), 0x116c0000 },
+	{ _MMIO(0x9888), 0x1d6c0000 },
+	{ _MMIO(0x9888), 0x091b8000 },
+	{ _MMIO(0x9888), 0x1b1c8000 },
+	{ _MMIO(0x9888), 0x0b4c2000 },
+	{ _MMIO(0x9888), 0x090d8000 },
+	{ _MMIO(0x9888), 0x0f0f1000 },
+	{ _MMIO(0x9888), 0x172c0800 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x5190ff30 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55903033 },
+	{ _MMIO(0x9888), 0x45901421 },
+	{ _MMIO(0x9888), 0x47900803 },
+	{ _MMIO(0x9888), 0x5790fff1 },
+	{ _MMIO(0x9888), 0x49900001 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x5990000f },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x5390ffff },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+int
+i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+
+	return 0;
+
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.h b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
new file mode 100644
index 000000000000..1b718f15f62e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_SKLGT4_H__
+#define __I915_OA_SKLGT4_H__
+
+extern int i915_oa_n_builtin_metric_sets_sklgt4;
+
+extern int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv);
+
+#endif
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH v15 06/14] drm/i915/perf: Add OA unit support for Gen 8+
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (4 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-06-01 11:48       ` Chris Wilson
  2017-05-31 12:33     ` [PATCH v15 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
                       ` (7 subsequent siblings)
  13 siblings, 1 reply; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

Enables access to OA unit metrics for BDW, CHV, SKL and BXT which all
share (more-or-less) the same OA unit design.

Of particular note in comparison to Haswell: some OA unit HW config
state has become per-context state and as a consequence it is somewhat
more complicated to manage synchronous state changes from the cpu while
there's no guarantee of what context (if any) is currently actively
running on the gpu.

The periodic sampling frequency, which can be particularly useful for
system-wide analysis (as opposed to command stream synchronised
MI_REPORT_PERF_COUNT commands), is perhaps the most surprising state to
have become per-context saved and restored (while the OABUFFER
destination is still a shared, system-wide resource).
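
To make the per-context nature of this state concrete, here is a minimal
sketch of how the periodic sampling exponent ends up in a context's saved
register state rather than in a single global register. It assumes the
GEN8_OACTXCONTROL bit definitions this series adds to i915_reg.h; the
helper name and reg_state layout details are illustrative only:

  /* Illustrative sketch: reg_state is the (offset, value) pair layout of
   * a logical ring context image and ctx_oactxctrl_offset points at the
   * OACTXCONTROL entry within it.
   */
  static void sketch_write_oactxcontrol(u32 *reg_state,
  					u32 ctx_oactxctrl_offset,
  					int period_exponent,
  					bool periodic)
  {
  	u32 ctrl = GEN8_OA_COUNTER_RESUME;

  	if (periodic)
  		ctrl |= (period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) |
  			GEN8_OA_TIMER_ENABLE;

  	/* Written into every context image, so whichever context is
  	 * restored next reloads this sampling state automatically.
  	 */
  	reg_state[ctx_oactxctrl_offset + 1] = ctrl;
  }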

This support for gen8+ takes care to consider a number of timing
challenges involved in synchronously updating per-context state,
primarily by programming all config state from the cpu and updating all
current and saved contexts synchronously while the OA unit is still
disabled.

The driver intentionally avoids depending on command streamer
programming to update OA state considering the lack of synchronization
between the automatic loading of OACTXCONTROL state (that includes the
periodic sampling state and enable state) on context restore and the
parsing of any general purpose BB the driver can control. I.e. this
implementation is careful to avoid the possibility of a context restore
temporarily enabling any out-of-date periodic sampling state. In
addition to the risk of transiently-out-of-date state being loaded
automatically, there are also internal HW latencies involved in the
loading of MUX configurations which would be difficult to account for
from the command streamer (and we only want to enable the unit once
the MUX configuration is complete).
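
As a rough illustration of what "programming from the cpu" amounts to
here, the MUX/NOA configuration is just a sequence of MMIO writes issued
while the OA unit is still disabled, along the lines of the sketch below
(the helper name is illustrative; the struct i915_oa_reg pairs are the
ones generated into the per-platform config files):

  /* Minimal sketch: write an (addr, value) register list from the CPU.
   * The OA unit is only enabled once the whole list has been written,
   * so a context restore can never observe a half-programmed MUX.
   */
  static void sketch_write_oa_config(struct drm_i915_private *dev_priv,
  				     const struct i915_oa_reg *regs,
  				     int n_regs)
  {
  	int i;

  	for (i = 0; i < n_regs; i++)
  		I915_WRITE(regs[i].addr, regs[i].value);
  }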

Since the Gen8+ OA unit design no longer supports clock gating the unit
off for a single given context (which effectively stopped any progress
of counters while any other context was running) and instead supports
tagging OA reports with a context ID for filtering on the CPU, we can
no longer hide the system-wide progress of counters from a
non-privileged application only interested in metrics for its own
context. Although we could theoretically try and subtract the progress
of other contexts before forwarding reports via read() we aren't in a
position to filter reports captured via MI_REPORT_PERF_COUNT commands.
As a result, for Gen8+, we always require
dev.i915.perf_stream_paranoid to be unset for any access to OA metrics
if not root.
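
A minimal sketch of how that requirement surfaces at stream-open time,
assuming the i915_perf_stream_paranoid sysctl value visible in the diff
below (the wrapper function is illustrative, not the exact open-path
code):

  /* Sketch only: on Gen8+ even a stream filtered to the caller's own
   * context still reveals system-wide counter progress, so the paranoid
   * check applies regardless of the target context.
   */
  static int sketch_check_oa_privilege(void)
  {
  	if (i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN))
  		return -EACCES;

  	return 0;
  }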

v5: Drain submitted requests when enabling metric set to ensure no
    lite-restore erases the context image we just updated (Lionel)

v6: In addition to drain, switch to kernel context & update all
    contexts in place (Chris)

v7: Add missing mutex_unlock() if switching to kernel context fails
    (Matthew)

v8: Simplify OA period/flex-eu-counters programming by using the
    batchbuffer instead of modifying ctx-image (Lionel)

v9: Back to updating the context image (due to erroneous testing,
    batchbuffer programming the OA unit doesn't actually work)
    (Lionel)
    Pin context before updating context image (Chris)
    Drop MMIO programming now that we switch to a kernel context with
    right values in initial context image (Chris)

v10: Just pin_map the contexts we want to modify or let the
     configuration happen on first use (Chris)

v11: Update kernel context OA config through the batchbuffer rather
     than on the fly ctx-image update (Lionel)

v12: Rework the OA context register updates again by switching away from
     user contexts and reconfiguring the kernel context through the
     batchbuffer and updating all the other contexts' context image.
     Also take care to lock slice/subslice configuration when OA is
     on. (Lionel)

v13: Request rpcs updates on all engines when updating the OA config
     (Lionel)

v14: Drop any kind of rpcs management now that we monitor sseu
     configuration changes in a later patch (Lionel)
     Remove usleep after programming the NOA configs on Gen8+; this
     doesn't seem to be needed (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> \o/
---
 drivers/gpu/drm/i915/i915_drv.h  |   45 +-
 drivers/gpu/drm/i915/i915_perf.c | 1005 +++++++++++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_reg.h  |   22 +
 drivers/gpu/drm/i915/intel_lrc.c |    2 +
 drivers/gpu/drm/i915/intel_lrc.h |    2 +
 include/uapi/drm/i915_drm.h      |   19 +-
 6 files changed, 1000 insertions(+), 95 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a9aaff0f15e0..f59cbe99e4b4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1998,9 +1998,17 @@ struct i915_oa_ops {
 	void (*init_oa_buffer)(struct drm_i915_private *dev_priv);
 
 	/**
-	 * @enable_metric_set: Applies any MUX configuration to set up the
-	 * Boolean and Custom (B/C) counters that are part of the counter
-	 * reports being sampled. May apply system constraints such as
+	 * @select_metric_set: The auto generated code that checks whether a
+	 * requested OA config is applicable to the system and if so sets up
+	 * the mux, oa and flex eu register config pointers according to the
+	 * current dev_priv->perf.oa.metrics_set.
+	 */
+	int (*select_metric_set)(struct drm_i915_private *dev_priv);
+
+	/**
+	 * @enable_metric_set: Selects and applies any MUX configuration to set
+	 * up the Boolean and Custom (B/C) counters that are part of the
+	 * counter reports being sampled. May apply system constraints such as
 	 * disabling EU clock gating as required.
 	 */
 	int (*enable_metric_set)(struct drm_i915_private *dev_priv);
@@ -2031,20 +2039,13 @@ struct i915_oa_ops {
 		    size_t *offset);
 
 	/**
-	 * @oa_buffer_check: Check for OA buffer data + update tail
-	 *
-	 * This is either called via fops or the poll check hrtimer (atomic
-	 * ctx) without any locks taken.
+	 * @oa_hw_tail_read: read the OA tail pointer register
 	 *
-	 * It's safe to read OA config state here unlocked, assuming that this
-	 * is only called while the stream is enabled, while the global OA
-	 * configuration can't be modified.
-	 *
-	 * Efficiency is more important than avoiding some false positives
-	 * here, which will be handled gracefully - likely resulting in an
-	 * %EAGAIN error for userspace.
+	 * In particular this enables us to share all the fiddly code for
+	 * handling the OA unit tail pointer race that affects multiple
+	 * generations.
 	 */
-	bool (*oa_buffer_check)(struct drm_i915_private *dev_priv);
+	u32 (*oa_hw_tail_read)(struct drm_i915_private *dev_priv);
 };
 
 struct intel_cdclk_state {
@@ -2409,6 +2410,7 @@ struct drm_i915_private {
 			struct {
 				struct i915_vma *vma;
 				u8 *vaddr;
+				u32 last_ctx_id;
 				int format;
 				int format_size;
 
@@ -2478,6 +2480,14 @@ struct drm_i915_private {
 			} oa_buffer;
 
 			u32 gen7_latched_oastatus1;
+			u32 ctx_oactxctrl_offset;
+			u32 ctx_flexeu0_offset;
+
+			/* The RPT_ID/reason field for Gen8+ includes a bit
+			 * to determine if the CTX ID in the report is valid
+			 * but the specific bit differs between Gen 8 and 9
+			 */
+			u32 gen8_valid_ctx_bit;
 
 			struct i915_oa_ops ops;
 			const struct i915_oa_format *oa_formats;
@@ -2788,6 +2798,8 @@ intel_info(const struct drm_i915_private *dev_priv)
 #define IS_KBL_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x590E || \
 				 INTEL_DEVID(dev_priv) == 0x5915 || \
 				 INTEL_DEVID(dev_priv) == 0x591E)
+#define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
 #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
@@ -3517,6 +3529,9 @@ i915_gem_context_lookup_timeline(struct i915_gem_context *ctx,
 
 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    uint32_t *reg_state);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 7e56b895fd34..9f524a0ec727 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -196,6 +196,12 @@
 
 #include "i915_drv.h"
 #include "i915_oa_hsw.h"
+#include "i915_oa_bdw.h"
+#include "i915_oa_chv.h"
+#include "i915_oa_sklgt2.h"
+#include "i915_oa_sklgt3.h"
+#include "i915_oa_sklgt4.h"
+#include "i915_oa_bxt.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -215,7 +221,7 @@
  *
  * Although this can be observed explicitly while copying reports to userspace
  * by checking for a zeroed report-id field in tail reports, we want to account
- * for this earlier, as part of the _oa_buffer_check to avoid lots of redundant
+ * for this earlier, as part of the oa_buffer_check to avoid lots of redundant
  * read() attempts.
  *
  * In effect we define a tail pointer for reading that lags the real tail
@@ -237,7 +243,7 @@
  * indicates that an updated tail pointer is needed.
  *
  * Most of the implementation details for this workaround are in
- * gen7_oa_buffer_check_unlocked() and gen7_appand_oa_reports()
+ * oa_buffer_check_unlocked() and _append_oa_reports()
  *
  * Note for posterity: previously the driver used to define an effective tail
  * pointer that lagged the real pointer by a 'tail margin' measured in bytes
@@ -272,6 +278,13 @@ static u32 i915_perf_stream_paranoid = true;
 
 #define INVALID_CTX_ID 0xffffffff
 
+/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
+#define OAREPORT_REASON_MASK           0x3f
+#define OAREPORT_REASON_SHIFT          19
+#define OAREPORT_REASON_TIMER          (1<<0)
+#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
+#define OAREPORT_REASON_CLK_RATIO      (1<<5)
+
 
 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
@@ -303,6 +316,13 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };
 
+static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+	[I915_OA_FORMAT_A12]		    = { 0, 64 },
+	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
+	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
+	[I915_OA_FORMAT_C4_B8]		    = { 7, 64 },
+};
+
 #define SAMPLE_OA_REPORT      (1<<0)
 
 /**
@@ -332,8 +352,20 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };
 
+static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
+}
+
+static u32 gen7_oa_hw_tail_read(struct drm_i915_private *dev_priv)
+{
+	u32 oastatus1 = I915_READ(GEN7_OASTATUS1);
+
+	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+}
+
 /**
- * gen7_oa_buffer_check_unlocked - check for data and update tail ptr state
+ * oa_buffer_check_unlocked - check for data and update tail ptr state
  * @dev_priv: i915 device instance
  *
  * This is either called via fops (for blocking reads in user ctx) or the poll
@@ -356,12 +388,11 @@ struct perf_open_properties {
  *
  * Returns: %true if the OA buffer contains data, else %false
  */
-static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
+static bool oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 {
 	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
 	unsigned long flags;
 	unsigned int aged_idx;
-	u32 oastatus1;
 	u32 head, hw_tail, aged_tail, aging_tail;
 	u64 now;
 
@@ -381,8 +412,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	aged_tail = dev_priv->perf.oa.oa_buffer.tails[aged_idx].offset;
 	aging_tail = dev_priv->perf.oa.oa_buffer.tails[!aged_idx].offset;
 
-	oastatus1 = I915_READ(GEN7_OASTATUS1);
-	hw_tail = oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
+	hw_tail = dev_priv->perf.oa.ops.oa_hw_tail_read(dev_priv);
 
 	/* The tail pointer increases in 64 byte increments,
 	 * not in report_size steps...
@@ -404,6 +434,7 @@ static bool gen7_oa_buffer_check_unlocked(struct drm_i915_private *dev_priv)
 	if (aging_tail != INVALID_TAIL_PTR &&
 	    ((now - dev_priv->perf.oa.oa_buffer.aging_timestamp) >
 	     OA_TAIL_MARGIN_NSEC)) {
+
 		aged_idx ^= 1;
 		dev_priv->perf.oa.oa_buffer.aged_tail_idx = aged_idx;
 
@@ -553,6 +584,287 @@ static int append_oa_sample(struct i915_perf_stream *stream,
  *
  * Returns: 0 on success, negative error code on failure.
  */
+static int gen8_append_oa_reports(struct i915_perf_stream *stream,
+				  char __user *buf,
+				  size_t count,
+				  size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	int report_size = dev_priv->perf.oa.oa_buffer.format_size;
+	u8 *oa_buf_base = dev_priv->perf.oa.oa_buffer.vaddr;
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	u32 mask = (OA_BUFFER_SIZE - 1);
+	size_t start_offset = *offset;
+	unsigned long flags;
+	unsigned int aged_tail_idx;
+	u32 head, tail;
+	u32 taken;
+	int ret = 0;
+
+	if (WARN_ON(!stream->enabled))
+		return -EIO;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	head = dev_priv->perf.oa.oa_buffer.head;
+	aged_tail_idx = dev_priv->perf.oa.oa_buffer.aged_tail_idx;
+	tail = dev_priv->perf.oa.oa_buffer.tails[aged_tail_idx].offset;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* An invalid tail pointer here means we're still waiting for the poll
+	 * hrtimer callback to give us a pointer
+	 */
+	if (tail == INVALID_TAIL_PTR)
+		return -EAGAIN;
+
+	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
+	 * while indexing relative to oa_buf_base.
+	 */
+	head -= gtt_offset;
+	tail -= gtt_offset;
+
+	/* An out of bounds or misaligned head or tail pointer implies a driver
+	 * bug since we validate + align the tail pointers we read from the
+	 * hardware and we are in full control of the head pointer which should
+	 * only be incremented by multiples of the report size (notably also
+	 * all a power of two).
+	 */
+	if (WARN_ONCE(head > OA_BUFFER_SIZE || head % report_size ||
+		      tail > OA_BUFFER_SIZE || tail % report_size,
+		      "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
+		      head, tail))
+		return -EIO;
+
+
+	for (/* none */;
+	     (taken = OA_TAKEN(tail, head));
+	     head = (head + report_size) & mask) {
+		u8 *report = oa_buf_base + head;
+		u32 *report32 = (void *)report;
+		u32 ctx_id;
+		u32 reason;
+
+		/* All the report sizes factor neatly into the buffer
+		 * size so we never expect to see a report split
+		 * between the beginning and end of the buffer.
+		 *
+		 * Given the initial alignment check a misalignment
+		 * here would imply a driver bug that would result
+		 * in an overrun.
+		 */
+		if (WARN_ON((OA_BUFFER_SIZE - head) < report_size)) {
+			DRM_ERROR("Spurious OA head ptr: non-integral report offset\n");
+			break;
+		}
+
+		/* The reason field includes flags identifying what
+		 * triggered this specific report (mostly timer
+		 * triggered or e.g. due to a context switch).
+		 *
+		 * This field is never expected to be zero so we can
+		 * check that the report isn't invalid before copying
+		 * it to userspace...
+		 */
+		reason = ((report32[0] >> OAREPORT_REASON_SHIFT) &
+			  OAREPORT_REASON_MASK);
+		if (reason == 0) {
+			if (__ratelimit(&dev_priv->perf.oa.spurious_report_rs))
+				DRM_NOTE("Skipping spurious, invalid OA report\n");
+			continue;
+		}
+
+		/* XXX: Just keep the lower 21 bits for now since I'm not
+		 * entirely sure if the HW touches any of the higher bits in
+		 * this field
+		 */
+		ctx_id = report32[2] & 0x1fffff;
+
+		/* Squash whatever is in the CTX_ID field if it's marked as
+		 * invalid to be sure we avoid false-positive, single-context
+		 * filtering below...
+		 *
+		 * Note: that we don't clear the valid_ctx_bit so userspace can
+		 * understand that the ID has been squashed by the kernel.
+		 */
+		if (!(report32[0] & dev_priv->perf.oa.gen8_valid_ctx_bit))
+			ctx_id = report32[2] = INVALID_CTX_ID;
+
+		/* NB: For Gen 8 the OA unit no longer supports clock gating
+		 * off for a specific context and the kernel can't securely
+		 * stop the counters from updating as system-wide / global
+		 * values.
+		 *
+		 * Automatic reports now include a context ID so reports can be
+		 * filtered on the cpu but it's not worth trying to
+		 * automatically subtract/hide counter progress for other
+		 * contexts while filtering since we can't stop userspace
+		 * issuing MI_REPORT_PERF_COUNT commands which would still
+		 * provide a side-band view of the real values.
+		 *
+		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
+		 * to normalize counters for a single filtered context, it
+		 * needs to be forwarded bookend context-switch reports so that it
+		 * can track switches in between MI_REPORT_PERF_COUNT commands
+		 * and can itself subtract/ignore the progress of counters
+		 * associated with other contexts. Note that the hardware
+		 * automatically triggers reports when switching to a new
+		 * context which are tagged with the ID of the newly active
+		 * context. To avoid the complexity (and likely fragility) of
+		 * reading ahead while parsing reports to try and minimize
+		 * forwarding redundant context switch reports (i.e. between
+		 * other, unrelated contexts) we simply elect to forward them
+		 * all.
+		 *
+		 * We don't rely solely on the reason field to identify context
+		 * switches since it's not-uncommon for periodic samples to
+		 * identify a switch before any 'context switch' report.
+		 */
+		if (!dev_priv->perf.oa.exclusive_stream->ctx ||
+		    dev_priv->perf.oa.specific_ctx_id == ctx_id ||
+		    (dev_priv->perf.oa.oa_buffer.last_ctx_id ==
+		     dev_priv->perf.oa.specific_ctx_id) ||
+		    reason & OAREPORT_REASON_CTX_SWITCH) {
+
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (dev_priv->perf.oa.exclusive_stream->ctx &&
+			    dev_priv->perf.oa.specific_ctx_id != ctx_id) {
+				report32[2] = INVALID_CTX_ID;
+			}
+
+			ret = append_oa_sample(stream, buf, count, offset,
+					       report);
+			if (ret)
+				break;
+
+			dev_priv->perf.oa.oa_buffer.last_ctx_id = ctx_id;
+		}
+
+		/* The above reason field sanity check is based on
+		 * the assumption that the OA buffer is initially
+		 * zeroed and we reset the field after copying so the
+		 * check is still meaningful once old reports start
+		 * being overwritten.
+		 */
+		report32[0] = 0;
+	}
+
+	if (start_offset != *offset) {
+		spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+		/* We removed the gtt_offset for the copy loop above, indexing
+		 * relative to oa_buf_base so put back here...
+		 */
+		head += gtt_offset;
+
+		I915_WRITE(GEN8_OAHEADPTR, head & GEN8_OAHEADPTR_MASK);
+		dev_priv->perf.oa.oa_buffer.head = head;
+
+		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * gen8_oa_read - copy status records then buffered OA reports
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Checks OA unit status registers and if necessary appends corresponding
+ * status records for userspace (such as for a buffer full condition) and then
+ * initiate appending any buffered OA reports.
+ *
+ * Updates @offset according to the number of bytes successfully copied into
+ * the userspace buffer.
+ *
+ * NB: some data may be successfully copied to the userspace buffer
+ * even if an error is returned, and this is reflected in the
+ * updated @offset.
+ *
+ * Returns: zero on success or a negative error code
+ */
+static int gen8_oa_read(struct i915_perf_stream *stream,
+			char __user *buf,
+			size_t count,
+			size_t *offset)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+	u32 oastatus;
+	int ret;
+
+	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
+		return -EIO;
+
+	oastatus = I915_READ(GEN8_OASTATUS);
+
+	/* We treat OABUFFER_OVERFLOW as a significant error:
+	 *
+	 * Although theoretically we could handle this more gracefully
+	 * sometimes, some Gens don't correctly suppress certain
+	 * automatically triggered reports in this condition and so we
+	 * have to assume that old reports are now being trampled
+	 * over.
+	 *
+	 * Considering how we don't currently give userspace control
+	 * over the OA buffer size and always configure a large 16MB
+	 * buffer, a buffer overflow likely indicates that something
+	 * has gone quite badly wrong.
+	 */
+	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret)
+			return ret;
+
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+
+		/* Note: .oa_enable() is expected to re-init the oabuffer
+		 * and reset GEN8_OASTATUS for us
+		 */
+		oastatus = I915_READ(GEN8_OASTATUS);
+	}
+
+	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+		if (ret)
+			return ret;
+		I915_WRITE(GEN8_OASTATUS,
+			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
+	}
+
+	return gen8_append_oa_reports(stream, buf, count, offset);
+}
+
+/**
+ * Copies all buffered OA reports into userspace read() buffer.
+ * @stream: An i915-perf stream opened for OA metrics
+ * @buf: destination buffer given by userspace
+ * @count: the number of bytes userspace wants to read
+ * @offset: (inout): the current position for writing into @buf
+ *
+ * Notably any error condition resulting in a short read (-%ENOSPC or
+ * -%EFAULT) will be returned even though one or more records may
+ * have been successfully copied. In this case it's up to the caller
+ * to decide if the error should be squashed before returning to
+ * userspace.
+ *
+ * Note: reports are consumed from the head, and appended to the
+ * tail, so the tail chases the head?... If you think that's mad
+ * and back-to-front you're not alone, but this follows the
+ * Gen PRM naming convention.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
 static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 				  char __user *buf,
 				  size_t count,
@@ -732,7 +1044,8 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 		if (ret)
 			return ret;
 
-		DRM_DEBUG("OA buffer overflow: force restart\n");
+		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
+			  dev_priv->perf.oa.period_exponent);
 
 		dev_priv->perf.oa.ops.oa_disable(dev_priv);
 		dev_priv->perf.oa.ops.oa_enable(dev_priv);
@@ -775,7 +1088,7 @@ static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
 		return -EIO;
 
 	return wait_event_interruptible(dev_priv->perf.oa.poll_wq,
-					dev_priv->perf.oa.ops.oa_buffer_check(dev_priv));
+					oa_buffer_check_unlocked(dev_priv));
 }
 
 /**
@@ -832,30 +1145,43 @@ static int i915_oa_read(struct i915_perf_stream *stream,
 static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
-	struct intel_ring *ring;
-	int ret;
 
-	ret = i915_mutex_lock_interruptible(&dev_priv->drm);
-	if (ret)
-		return ret;
+	if (i915.enable_execlists)
+		dev_priv->perf.oa.specific_ctx_id = stream->ctx->hw_id;
+	else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
+		struct intel_ring *ring;
+		int ret;
 
-	/* As the ID is the gtt offset of the context's vma we pin
-	 * the vma to ensure the ID remains fixed.
-	 *
-	 * NB: implied RCS engine...
-	 */
-	ring = engine->context_pin(engine, stream->ctx);
-	mutex_unlock(&dev_priv->drm.struct_mutex);
-	if (IS_ERR(ring))
-		return PTR_ERR(ring);
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;
 
-	/* Explicitly track the ID (instead of calling i915_ggtt_offset()
-	 * on the fly) considering the difference with gen8+ and
-	 * execlists
-	 */
-	dev_priv->perf.oa.specific_ctx_id =
-		i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+		/* As the ID is the gtt offset of the context's vma we
+		 * pin the vma to ensure the ID remains fixed.
+		 *
+		 * NB: implied RCS engine...
+		 */
+		ring = engine->context_pin(engine, stream->ctx);
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+		if (IS_ERR(ring))
+			return PTR_ERR(ring);
+
+		/* Explicitly track the ID (instead of calling
+		 * i915_ggtt_offset() on the fly) considering the difference
+		 * with gen8+ and execlists
+		 */
+		dev_priv->perf.oa.specific_ctx_id =
+			i915_ggtt_offset(stream->ctx->engine[engine->id].state);
+	}
 
 	return 0;
 }
@@ -870,14 +1196,19 @@ static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
-	struct intel_engine_cs *engine = dev_priv->engine[RCS];
 
-	mutex_lock(&dev_priv->drm.struct_mutex);
+	if (i915.enable_execlists) {
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+	} else {
+		struct intel_engine_cs *engine = dev_priv->engine[RCS];
 
-	dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
-	engine->context_unpin(engine, stream->ctx);
+		mutex_lock(&dev_priv->drm.struct_mutex);
 
-	mutex_unlock(&dev_priv->drm.struct_mutex);
+		dev_priv->perf.oa.specific_ctx_id = INVALID_CTX_ID;
+		engine->context_unpin(engine, stream->ctx);
+
+		mutex_unlock(&dev_priv->drm.struct_mutex);
+	}
 }
 
 static void
@@ -901,6 +1232,11 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 
 	BUG_ON(stream != dev_priv->perf.oa.exclusive_stream);
 
+	/* Unset exclusive_stream first, it might be checked while
+	 * disabling the metric set on gen8+.
+	 */
+	dev_priv->perf.oa.exclusive_stream = NULL;
+
 	dev_priv->perf.oa.ops.disable_metric_set(dev_priv);
 
 	free_oa_buffer(dev_priv);
@@ -911,8 +1247,6 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 	if (stream->ctx)
 		oa_put_render_ctx_id(stream);
 
-	dev_priv->perf.oa.exclusive_stream = NULL;
-
 	if (dev_priv->perf.oa.spurious_report_rs.missed) {
 		DRM_NOTE("%d spurious OA report notices suppressed due to ratelimiting\n",
 			 dev_priv->perf.oa.spurious_report_rs.missed);
@@ -967,6 +1301,61 @@ static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.pollin = false;
 }
 
+static void gen8_init_oa_buffer(struct drm_i915_private *dev_priv)
+{
+	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	I915_WRITE(GEN8_OASTATUS, 0);
+	I915_WRITE(GEN8_OAHEADPTR, gtt_offset);
+	dev_priv->perf.oa.oa_buffer.head = gtt_offset;
+
+	I915_WRITE(GEN8_OABUFFER_UDW, 0);
+
+	/* PRM says:
+	 *
+	 *  "This MMIO must be set before the OATAILPTR
+	 *  register and after the OAHEADPTR register. This is
+	 *  to enable proper functionality of the overflow
+	 *  bit."
+	 */
+	I915_WRITE(GEN8_OABUFFER, gtt_offset |
+		   OABUFFER_SIZE_16M | OA_MEM_SELECT_GGTT);
+	I915_WRITE(GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
+
+	/* Mark that we need updated tail pointers to read from... */
+	dev_priv->perf.oa.oa_buffer.tails[0].offset = INVALID_TAIL_PTR;
+	dev_priv->perf.oa.oa_buffer.tails[1].offset = INVALID_TAIL_PTR;
+
+	/* Reset state used to recognise context switches, affecting which
+	 * reports we will forward to userspace while filtering for a single
+	 * context.
+	 */
+	dev_priv->perf.oa.oa_buffer.last_ctx_id = INVALID_CTX_ID;
+
+	spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
+
+	/* NB: although the OA buffer will initially be allocated
+	 * zeroed via shmfs (and so this memset is redundant when
+	 * first allocating), we may re-init the OA buffer, either
+	 * when re-enabling a stream or in error/reset paths.
+	 *
+	 * The reason we clear the buffer for each re-init is for the
+	 * sanity check in gen8_append_oa_reports() that looks at the
+	 * reason field to make sure it's non-zero which relies on
+	 * the assumption that new reports are being written to zeroed
+	 * memory...
+	 */
+	memset(dev_priv->perf.oa.oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
+
+	/* Maybe make ->pollin per-stream state if we support multiple
+	 * concurrent streams in the future.
+	 */
+	dev_priv->perf.oa.pollin = false;
+}
+
 static int alloc_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	struct drm_i915_gem_object *bo;
@@ -1114,6 +1503,311 @@ static void hsw_disable_metric_set(struct drm_i915_private *dev_priv)
 				      ~GT_NOA_ENABLE));
 }
 
+/* NB: It must always remain pointer safe to run this even if the OA unit
+ * has been disabled.
+ *
+ * It's fine to put out-of-date values into these per-context registers
+ * in the case that the OA unit has been disabled.
+ */
+static void gen8_update_reg_state_unlocked(struct i915_gem_context *ctx,
+					   u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = ctx->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
+	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	int i;
+
+	reg_state[ctx_oactxctrl] = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	reg_state[ctx_oactxctrl+1] = (dev_priv->perf.oa.period_exponent <<
+				      GEN8_OA_TIMER_PERIOD_SHIFT) |
+				     (dev_priv->perf.oa.periodic ?
+				      GEN8_OA_TIMER_ENABLE : 0) |
+				     GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 state_offset = ctx_flexeu0 + i * 2;
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		reg_state[state_offset] = mmio;
+		reg_state[state_offset+1] = value;
+	}
+}
+
+/* Same as gen8_update_reg_state_unlocked only through the batchbuffer. This
+ * is only used by the kernel context.
+ */
+static int gen8_emit_oa_config(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv = req->i915;
+	const struct i915_oa_reg *flex_regs = dev_priv->perf.oa.flex_regs;
+	int n_flex_regs = dev_priv->perf.oa.flex_regs_len;
+	/* The MMIO offsets for Flex EU registers aren't contiguous */
+	u32 flex_mmio[] = {
+		i915_mmio_reg_offset(EU_PERF_CNTL0),
+		i915_mmio_reg_offset(EU_PERF_CNTL1),
+		i915_mmio_reg_offset(EU_PERF_CNTL2),
+		i915_mmio_reg_offset(EU_PERF_CNTL3),
+		i915_mmio_reg_offset(EU_PERF_CNTL4),
+		i915_mmio_reg_offset(EU_PERF_CNTL5),
+		i915_mmio_reg_offset(EU_PERF_CNTL6),
+	};
+	u32 *cs;
+	int i;
+
+	cs = intel_ring_begin(req, n_flex_regs * 2 + 4);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_LOAD_REGISTER_IMM(n_flex_regs + 1);
+
+	*cs++ = i915_mmio_reg_offset(GEN8_OACTXCONTROL);
+	*cs++ = (dev_priv->perf.oa.period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) |
+		(dev_priv->perf.oa.periodic ? GEN8_OA_TIMER_ENABLE : 0) |
+		GEN8_OA_COUNTER_RESUME;
+
+	for (i = 0; i < ARRAY_SIZE(flex_mmio); i++) {
+		u32 mmio = flex_mmio[i];
+
+		/* This arbitrary default will select the 'EU FPU0 Pipeline
+		 * Active' event. In the future it's anticipated that there
+		 * will be an explicit 'No Event' we can select, but not yet...
+		 */
+		u32 value = 0;
+		int j;
+
+		for (j = 0; j < n_flex_regs; j++) {
+			if (i915_mmio_reg_offset(flex_regs[j].addr) == mmio) {
+				value = flex_regs[j].value;
+				break;
+			}
+		}
+
+		*cs++ = mmio;
+		*cs++ = value;
+	}
+
+	*cs++ = MI_NOOP;
+	intel_ring_advance(req, cs);
+
+	return 0;
+}
+
+static int gen8_switch_to_updated_kernel_context(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine = dev_priv->engine[RCS];
+	struct i915_gem_timeline *timeline;
+	struct drm_i915_gem_request *req;
+	int ret;
+
+	lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+	i915_gem_retire_requests(dev_priv);
+
+	req = i915_gem_request_alloc(engine, dev_priv->kernel_context);
+	if (IS_ERR(req))
+		return PTR_ERR(req);
+
+	ret = gen8_emit_oa_config(req);
+	if (ret)
+		return ret;
+
+	/* Queue this switch after all other activity */
+	list_for_each_entry(timeline, &dev_priv->gt.timelines, link) {
+		struct drm_i915_gem_request *prev;
+		struct intel_timeline *tl;
+
+		tl = &timeline->engine[engine->id];
+		prev = i915_gem_active_raw(&tl->last_request,
+					   &dev_priv->drm.struct_mutex);
+		if (prev)
+			i915_sw_fence_await_sw_fence_gfp(&req->submit,
+							 &prev->submit,
+							 GFP_KERNEL);
+	}
+
+	ret = i915_switch_context(req);
+	i915_add_request(req);
+
+	return ret;
+}
+
+/* Manages updating the per-context aspects of the OA stream
+ * configuration across all contexts.
+ *
+ * The awkward consideration here is that OACTXCONTROL controls the
+ * exponent for periodic sampling which is primarily used for system
+ * wide profiling where we'd like a consistent sampling period even in
+ * the face of context switches.
+ *
+ * Our approach of updating the register state context (as opposed to
+ * say using a workaround batch buffer) ensures that the hardware
+ * won't automatically reload an out-of-date timer exponent even
+ * transiently before a WA BB could be parsed.
+ *
+ * This function needs to:
+ * - Ensure the currently running context's per-context OA state is
+ *   updated
+ * - Ensure that all existing contexts will have the correct per-context
+ *   OA state if they are scheduled for use.
+ * - Ensure any new contexts will be initialized with the correct
+ *   per-context OA state.
+ *
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
+				       bool interruptible)
+{
+	struct i915_gem_context *ctx;
+	int ret;
+	unsigned int wait_flags = I915_WAIT_LOCKED;
+
+	if (interruptible) {
+		ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+		if (ret)
+			return ret;
+
+		wait_flags |= I915_WAIT_INTERRUPTIBLE;
+	} else {
+		mutex_lock(&dev_priv->drm.struct_mutex);
+	}
+
+	/* Switch away from any user context. */
+	ret = gen8_switch_to_updated_kernel_context(dev_priv);
+	if (ret)
+		goto out;
+
+	/* The OA register config is set up through the context image. This image
+	 * might be written to by the GPU on context switch (in particular on
+	 * lite-restore). This means we can't safely update a context's image,
+	 * if this context is scheduled/submitted to run on the GPU.
+	 *
+	 * We could emit the OA register config through the batch buffer but
+	 * this might leave a small interval of time where the OA unit is
+	 * configured at an invalid sampling period.
+	 *
+	 * So far the best way to work around this issue seems to be draining
+	 * the GPU from any submitted work.
+	 */
+	ret = i915_gem_wait_for_idle(dev_priv, wait_flags);
+	if (ret)
+		goto out;
+
+	/* Update all contexts now that we've stalled the submission. */
+	list_for_each_entry(ctx, &dev_priv->context_list, link) {
+		struct intel_context *ce = &ctx->engine[RCS];
+		u32 *regs;
+
+		/* OA settings will be set upon first use */
+		if (!ce->state)
+			continue;
+
+		regs = i915_gem_object_pin_map(ce->state->obj, I915_MAP_WB);
+		if (IS_ERR(regs)) {
+			ret = PTR_ERR(regs);
+			goto out;
+		}
+
+		ce->state->obj->mm.dirty = true;
+		regs += LRC_STATE_PN * PAGE_SIZE / sizeof(*regs);
+
+		gen8_update_reg_state_unlocked(ctx, regs);
+
+		i915_gem_object_unpin_map(ce->state->obj);
+	}
+
+ out:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	return ret;
+}
+
+static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
+{
+	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
+	int i;
+
+	if (ret)
+		return ret;
+
+	/* We disable slice/unslice clock ratio change reports on SKL since
+	 * they are too noisy. The HW generates a lot of redundant reports
+	 * where the ratio hasn't really changed, causing a lot of redundant
+	 * work for processes and increasing the chances we'll hit buffer
+	 * overruns.
+	 *
+	 * Although we don't currently use the 'disable overrun' OABUFFER
+	 * feature, it's worth noting that clock ratio reports have to be
+	 * disabled before considering using that feature since the HW doesn't
+	 * correctly block these reports.
+	 *
+	 * Currently none of the high-level metrics we have depend on knowing
+	 * this ratio to normalize.
+	 *
+	 * Note: This register is not power context saved and restored, but
+	 * that's OK considering that we disable RC6 while the OA unit is
+	 * enabled.
+	 *
+	 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
+	 * be read back from automatically triggered reports, as part of the
+	 * RPT_ID field.
+	 */
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+		I915_WRITE(GEN8_OA_DEBUG,
+			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
+					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
+	}
+
+	/* Update all contexts prior to writing the mux configurations as we
+	 * need to make sure all slices/subslices are ON before writing to NOA
+	 * registers.
+	 */
+	ret = gen8_configure_all_contexts(dev_priv, true);
+	if (ret)
+		return ret;
+
+	I915_WRITE(GDT_CHICKEN_BITS, 0xA0);
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		config_oa_regs(dev_priv, dev_priv->perf.oa.mux_regs[i],
+			       dev_priv->perf.oa.mux_regs_lens[i]);
+	}
+	I915_WRITE(GDT_CHICKEN_BITS, 0x80);
+
+	config_oa_regs(dev_priv, dev_priv->perf.oa.b_counter_regs,
+		       dev_priv->perf.oa.b_counter_regs_len);
+
+	return 0;
+}
+
+static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+{
+	/* Reset all contexts' slices/subslices configurations. */
+	gen8_configure_all_contexts(dev_priv, false);
+}
+
 static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 {
 	lockdep_assert_held(&dev_priv->perf.hook_lock);
@@ -1158,6 +1852,29 @@ static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
 }
 
+static void gen8_oa_enable(struct drm_i915_private *dev_priv)
+{
+	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
+
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory, which this helps maintain.
+	 */
+	gen8_init_oa_buffer(dev_priv);
+
+	/* Note: we don't rely on the hardware to perform single context
+	 * filtering and instead filter on the cpu based on the context-id
+	 * field of reports
+	 */
+	I915_WRITE(GEN8_OACONTROL, (report_format <<
+				    GEN8_OA_REPORT_FORMAT_SHIFT) |
+				   GEN8_OA_COUNTER_ENABLE);
+}
+
 /**
  * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1184,6 +1901,11 @@ static void gen7_oa_disable(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN7_OACONTROL, 0);
 }
 
+static void gen8_oa_disable(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(GEN8_OACONTROL, 0);
+}
+
 /**
  * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
  * @stream: An i915 perf stream opened for OA metrics
@@ -1362,6 +2084,21 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 	return ret;
 }
 
+void i915_oa_init_reg_state(struct intel_engine_cs *engine,
+			    struct i915_gem_context *ctx,
+			    u32 *reg_state)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+
+	if (engine->id != RCS)
+		return;
+
+	if (!dev_priv->perf.initialized)
+		return;
+
+	gen8_update_reg_state_unlocked(ctx, reg_state);
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
@@ -1487,7 +2224,7 @@ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
 		container_of(hrtimer, typeof(*dev_priv),
 			     perf.oa.poll_check_timer);
 
-	if (dev_priv->perf.oa.ops.oa_buffer_check(dev_priv)) {
+	if (oa_buffer_check_unlocked(dev_priv)) {
 		dev_priv->perf.oa.pollin = true;
 		wake_up(&dev_priv->perf.oa.poll_wq);
 	}
@@ -1776,6 +2513,7 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	struct i915_gem_context *specific_ctx = NULL;
 	struct i915_perf_stream *stream = NULL;
 	unsigned long f_flags = 0;
+	bool privileged_op = true;
 	int stream_fd;
 	int ret;
 
@@ -1793,12 +2531,28 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 		}
 	}
 
+	/* On Haswell the OA unit supports clock gating off for a specific
+	 * context and in this mode there's no visibility of metrics for the
+	 * rest of the system, which we consider acceptable for a
+	 * non-privileged client.
+	 *
+	 * For Gen8+ the OA unit no longer supports clock gating off for a
+	 * specific context and the kernel can't securely stop the counters
+	 * from updating as system-wide / global values. Even though we can
+	 * filter reports based on the included context ID we can't block
+	 * clients from seeing the raw / global counter values via
+	 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
+	 * enable the OA unit by default.
+	 */
+	if (IS_HASWELL(dev_priv) && specific_ctx)
+		privileged_op = false;
+
 	/* Similar to perf's kernel.perf_paranoid_cpu sysctl option
 	 * we check a dev.i915.perf_stream_paranoid sysctl option
 	 * to determine if it's ok to access system wide OA counters
 	 * without CAP_SYS_ADMIN privileges.
 	 */
-	if (!specific_ctx &&
+	if (privileged_op &&
 	    i915_perf_stream_paranoid && !capable(CAP_SYS_ADMIN)) {
 		DRM_DEBUG("Insufficient privileges to open system-wide i915 perf stream\n");
 		ret = -EACCES;
@@ -2070,9 +2824,6 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
  */
 void i915_perf_register(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.initialized)
 		return;
 
@@ -2088,11 +2839,38 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	if (!dev_priv->perf.metrics_kobj)
 		goto exit;
 
-	if (i915_perf_register_sysfs_hsw(dev_priv)) {
-		kobject_put(dev_priv->perf.metrics_kobj);
-		dev_priv->perf.metrics_kobj = NULL;
+	if (IS_HASWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_hsw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_BROADWELL(dev_priv)) {
+		if (i915_perf_register_sysfs_bdw(dev_priv))
+			goto sysfs_error;
+	} else if (IS_CHERRYVIEW(dev_priv)) {
+		if (i915_perf_register_sysfs_chv(dev_priv))
+			goto sysfs_error;
+	} else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt3(dev_priv))
+				goto sysfs_error;
+		} else if (IS_SKL_GT4(dev_priv)) {
+			if (i915_perf_register_sysfs_sklgt4(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
+	} else if (IS_BROXTON(dev_priv)) {
+		if (i915_perf_register_sysfs_bxt(dev_priv))
+			goto sysfs_error;
 	}
 
+	goto exit;
+
+sysfs_error:
+	kobject_put(dev_priv->perf.metrics_kobj);
+	dev_priv->perf.metrics_kobj = NULL;
+
 exit:
 	mutex_unlock(&dev_priv->perf.lock);
 }
@@ -2108,13 +2886,24 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
  */
 void i915_perf_unregister(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
 	if (!dev_priv->perf.metrics_kobj)
 		return;
 
-	i915_perf_unregister_sysfs_hsw(dev_priv);
+	if (IS_HASWELL(dev_priv))
+		i915_perf_unregister_sysfs_hsw(dev_priv);
+	else if (IS_BROADWELL(dev_priv))
+		i915_perf_unregister_sysfs_bdw(dev_priv);
+	else if (IS_CHERRYVIEW(dev_priv))
+		i915_perf_unregister_sysfs_chv(dev_priv);
+	else if (IS_SKYLAKE(dev_priv)) {
+		if (IS_SKL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_sklgt2(dev_priv);
+		else if (IS_SKL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_sklgt3(dev_priv);
+		else if (IS_SKL_GT4(dev_priv))
+			i915_perf_unregister_sysfs_sklgt4(dev_priv);
+	} else if (IS_BROXTON(dev_priv))
+		i915_perf_unregister_sysfs_bxt(dev_priv);
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -2173,36 +2962,105 @@ static struct ctl_table dev_root[] = {
  */
 void i915_perf_init(struct drm_i915_private *dev_priv)
 {
-	if (!IS_HASWELL(dev_priv))
-		return;
-
-	hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
-		     CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
-	init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
+	dev_priv->perf.oa.n_builtin_sets = 0;
+
+	if (IS_HASWELL(dev_priv)) {
+		dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
+		dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
+		dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
+		dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
+		dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
+		dev_priv->perf.oa.ops.read = gen7_oa_read;
+		dev_priv->perf.oa.ops.oa_hw_tail_read =
+			gen7_oa_hw_tail_read;
+
+		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+
+		dev_priv->perf.oa.n_builtin_sets =
+			i915_oa_n_builtin_metric_sets_hsw;
+	} else if (i915.enable_execlists) {
+		/* Note: although we could theoretically also support the
+		 * legacy ringbuffer mode on BDW (and earlier iterations of
+		 * this driver, before upstreaming did this) it didn't seem
+		 * worth the complexity to maintain now that BDW+ enable
+		 * execlist mode by default.
+		 */
 
-	INIT_LIST_HEAD(&dev_priv->perf.streams);
-	mutex_init(&dev_priv->perf.lock);
-	spin_lock_init(&dev_priv->perf.hook_lock);
-	spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
+		if (IS_GEN8(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
+
+			if (IS_BROADWELL(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bdw;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bdw;
+			} else if (IS_CHERRYVIEW(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_chv;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_chv;
+			}
+		} else if (IS_GEN9(dev_priv)) {
+			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
+			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
+
+			if (IS_SKL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt2;
+			} else if (IS_SKL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt3;
+			} else if (IS_SKL_GT4(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_sklgt4;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_sklgt4;
+			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_bxt;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_bxt;
+			}
+		}
 
-	dev_priv->perf.oa.ops.init_oa_buffer = gen7_init_oa_buffer;
-	dev_priv->perf.oa.ops.enable_metric_set = hsw_enable_metric_set;
-	dev_priv->perf.oa.ops.disable_metric_set = hsw_disable_metric_set;
-	dev_priv->perf.oa.ops.oa_enable = gen7_oa_enable;
-	dev_priv->perf.oa.ops.oa_disable = gen7_oa_disable;
-	dev_priv->perf.oa.ops.read = gen7_oa_read;
-	dev_priv->perf.oa.ops.oa_buffer_check =
-		gen7_oa_buffer_check_unlocked;
+		if (dev_priv->perf.oa.n_builtin_sets) {
+			dev_priv->perf.oa.ops.init_oa_buffer = gen8_init_oa_buffer;
+			dev_priv->perf.oa.ops.enable_metric_set =
+				gen8_enable_metric_set;
+			dev_priv->perf.oa.ops.disable_metric_set =
+				gen8_disable_metric_set;
+			dev_priv->perf.oa.ops.oa_enable = gen8_oa_enable;
+			dev_priv->perf.oa.ops.oa_disable = gen8_oa_disable;
+			dev_priv->perf.oa.ops.read = gen8_oa_read;
+			dev_priv->perf.oa.ops.oa_hw_tail_read =
+				gen8_oa_hw_tail_read;
+
+			dev_priv->perf.oa.oa_formats = gen8_plus_oa_formats;
+		}
+	}
 
-	dev_priv->perf.oa.oa_formats = hsw_oa_formats;
+	if (dev_priv->perf.oa.n_builtin_sets) {
+		hrtimer_init(&dev_priv->perf.oa.poll_check_timer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		dev_priv->perf.oa.poll_check_timer.function = oa_poll_check_timer_cb;
+		init_waitqueue_head(&dev_priv->perf.oa.poll_wq);
 
-	dev_priv->perf.oa.n_builtin_sets =
-		i915_oa_n_builtin_metric_sets_hsw;
+		INIT_LIST_HEAD(&dev_priv->perf.streams);
+		mutex_init(&dev_priv->perf.lock);
+		spin_lock_init(&dev_priv->perf.hook_lock);
+		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
-	dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
+		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
 
-	dev_priv->perf.initialized = true;
+		dev_priv->perf.initialized = true;
+	}
 }
 
 /**
@@ -2217,5 +3075,6 @@ void i915_perf_fini(struct drm_i915_private *dev_priv)
 	unregister_sysctl_table(dev_priv->perf.sysctl_header);
 
 	memset(&dev_priv->perf.oa.ops, 0, sizeof(dev_priv->perf.oa.ops));
+
 	dev_priv->perf.initialized = false;
 }
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 231ee86625cd..ecebc60cd445 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -653,6 +653,12 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define GEN8_OACTXID _MMIO(0x2364)
 
+#define GEN8_OA_DEBUG _MMIO(0x2B04)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+
 #define GEN8_OACONTROL _MMIO(0x2B00)
 #define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
 #define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
@@ -674,6 +680,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
 #define  GEN7_OABUFFER_RESUME		    (1<<0)
 
+#define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)
 
 #define GEN7_OASTATUS1 _MMIO(0x2364)
@@ -692,7 +699,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)
 
 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
+#define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
+#define GEN8_OATAILPTR_MASK    0xffffffc0
 
 #define OABUFFER_SIZE_128K  (0<<3)
 #define OABUFFER_SIZE_256K  (1<<3)
@@ -705,7 +714,17 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define OA_MEM_SELECT_GGTT  (1<<0)
 
+/*
+ * Flexible, Aggregate EU Counter Registers.
+ * Note: these aren't contiguous
+ */
 #define EU_PERF_CNTL0	    _MMIO(0xe458)
+#define EU_PERF_CNTL1	    _MMIO(0xe558)
+#define EU_PERF_CNTL2	    _MMIO(0xe658)
+#define EU_PERF_CNTL3	    _MMIO(0xe758)
+#define EU_PERF_CNTL4	    _MMIO(0xe45c)
+#define EU_PERF_CNTL5	    _MMIO(0xe55c)
+#define EU_PERF_CNTL6	    _MMIO(0xe65c)
 
 #define GDT_CHICKEN_BITS    _MMIO(0x9840)
 #define GT_NOA_ENABLE	    0x00000080
@@ -2325,6 +2344,9 @@ enum skl_disp_power_wells {
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
 #define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)
 
+#define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
+#define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
+
 /* Fuse readout registers for GT */
 #define CHV_FUSE_GT			_MMIO(VLV_DISPLAY_BASE + 0x2168)
 #define   CHV_FGT_DISABLE_SS0		(1 << 10)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 78fb1f452891..ad37e2b236b0 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1945,6 +1945,8 @@ static void execlists_init_reg_state(u32 *regs,
 		regs[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
 		CTX_REG(regs, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
 			make_rpcs(&ctx->engine[engine->id].sseu));
+
+		i915_oa_init_reg_state(engine, ctx, regs);
 	}
 }
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 52b3a1fd4059..eb7bb51c0a4a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -62,6 +62,8 @@ enum {
 	INTEL_CONTEXT_SCHEDULE_OUT,
 };
 
+struct sseu_dev_info;
+
 /* Logical Rings */
 void intel_logical_ring_stop(struct intel_engine_cs *engine);
 void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index f24a80d2d42e..d561b1f98b20 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1308,13 +1308,18 @@ struct drm_i915_gem_context_param {
 };
 
 enum drm_i915_oa_format {
-	I915_OA_FORMAT_A13 = 1,
-	I915_OA_FORMAT_A29,
-	I915_OA_FORMAT_A13_B8_C8,
-	I915_OA_FORMAT_B4_C8,
-	I915_OA_FORMAT_A45_B8_C8,
-	I915_OA_FORMAT_B4_C8_A16,
-	I915_OA_FORMAT_C4_B8,
+	I915_OA_FORMAT_A13 = 1,	    /* HSW only */
+	I915_OA_FORMAT_A29,	    /* HSW only */
+	I915_OA_FORMAT_A13_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8,	    /* HSW only */
+	I915_OA_FORMAT_A45_B8_C8,   /* HSW only */
+	I915_OA_FORMAT_B4_C8_A16,   /* HSW only */
+	I915_OA_FORMAT_C4_B8,	    /* HSW+ */
+
+	/* Gen8+ */
+	I915_OA_FORMAT_A12,
+	I915_OA_FORMAT_A12_B8_C8,
+	I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 
 	I915_OA_FORMAT_MAX	    /* non-ABI */
 };
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread
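
For context, a minimal userspace sketch (not part of the patch itself) of
opening a system-wide OA stream on Gen8+ once this lands. It assumes the
existing i915 perf uapi (DRM_IOCTL_I915_PERF_OPEN and the
DRM_I915_PERF_PROP_* properties) plus the new
I915_OA_FORMAT_A32u40_A4u32_B8_C8 format above; the render node path, metric
set ID and exponent value are purely illustrative. With the privileged_op
change above, a system-wide stream needs CAP_SYS_ADMIN unless
dev.i915.perf_stream_paranoid is set to 0.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <i915_drm.h>	/* uapi header as shipped by libdrm; path may vary */

int main(void)
{
	uint64_t properties[] = {
		/* include raw OA reports in the samples read() back */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		/* metric set ID as advertised via sysfs; 1 == RenderBasic here */
		DRM_I915_PERF_PROP_OA_METRICS_SET, 1,
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		/* sampling period ~= 2^(exponent + 1) timestamp ticks */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};
	int drm_fd = open("/dev/dri/renderD128", O_RDWR); /* illustrative node */
	int stream_fd;

	if (drm_fd < 0) {
		perror("open");
		return 1;
	}

	/* no CTX_HANDLE property => system-wide, hence the privilege check */
	stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
	if (stream_fd < 0)
		perror("DRM_IOCTL_I915_PERF_OPEN");
	else
		close(stream_fd);
	close(drm_fd);
	return 0;
}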

* [PATCH v15 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (5 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 08/14] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
                       ` (6 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

These are auto-generated from an XML description of metric sets,
currently maintained in gputop, ref:

 https://github.com/rib/gputop
 > gputop-data/oa-*.xml
 > scripts/i915-perf-kernelgen.py

 $ make -C gputop-data -f Makefile.xml

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
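Not part of the commit message: a small stand-alone sketch of the per-slice
MUX selection pattern these generated files follow. The get_*_mux_config()
helpers only hand back the tables whose slices are actually fused in, which
is why mux_regs[]/mux_regs_lens[] grow to 6 entries in i915_drv.h below. The
struct and table names here are illustrative, not the kernel's; the register
values are truncated copies from the tables in this patch.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for struct i915_oa_reg: MMIO address + value. */
struct oa_reg {
	uint32_t addr;
	uint32_t value;
};

/* Truncated stand-ins for the per-slice MUX tables in this patch. */
static const struct oa_reg mux_slice0[] = { { 0x9888, 0x105c00e0 } };
static const struct oa_reg mux_slice1[] = { { 0x9888, 0x10dc00e0 } };

/*
 * Pick which MUX tables apply for a given slice fuse mask, in the same way
 * get_compute_basic_mux_config() checks sseu.slice_mask.
 */
static int get_mux_config(uint32_t slice_mask,
			  const struct oa_reg **regs, int *lens)
{
	int n = 0;

	if (slice_mask & 0x01) {
		regs[n] = mux_slice0;
		lens[n] = sizeof(mux_slice0) / sizeof(mux_slice0[0]);
		n++;
	}
	if (slice_mask & 0x02) {
		regs[n] = mux_slice1;
		lens[n] = sizeof(mux_slice1) / sizeof(mux_slice1[0]);
		n++;
	}
	return n;
}

int main(void)
{
	const struct oa_reg *regs[2];
	int lens[2];
	int i, n = get_mux_config(0x3, regs, lens); /* both slices fused in */

	/* The kernel then programs each selected table via config_oa_regs(). */
	for (i = 0; i < n; i++)
		printf("table %d: %d regs, first write 0x%04x <- 0x%08x\n",
		       i, lens[i], regs[i][0].addr, regs[i][0].value);
	return 0;
}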
 drivers/gpu/drm/i915/i915_drv.h       |    4 +-
 drivers/gpu/drm/i915/i915_oa_bdw.c    | 5066 ++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_bxt.c    | 2464 +++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_chv.c    | 2657 ++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_hsw.c    |   48 +
 drivers/gpu/drm/i915/i915_oa_sklgt2.c | 3323 ++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt3.c | 2872 ++++++++++++++++++-
 drivers/gpu/drm/i915/i915_oa_sklgt4.c | 2915 ++++++++++++++++++-
 8 files changed, 19161 insertions(+), 188 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f59cbe99e4b4..45c86b2ec968 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2398,8 +2398,8 @@ struct drm_i915_private {
 
 			int metrics_set;
 
-			const struct i915_oa_reg *mux_regs[2];
-			int mux_regs_lens[2];
+			const struct i915_oa_reg *mux_regs[6];
+			int mux_regs_lens[6];
 			int n_mux_configs;
 
 			const struct i915_oa_reg *b_counter_regs;
diff --git a/drivers/gpu/drm/i915/i915_oa_bdw.c b/drivers/gpu/drm/i915/i915_oa_bdw.c
index 9a11c03b4ecb..d4462c2aaaee 100644
--- a/drivers/gpu/drm/i915/i915_oa_bdw.c
+++ b/drivers/gpu/drm/i915/i915_oa_bdw.c
@@ -33,9 +33,30 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_DATA_PORT_READS_COALESCING,
+	METRIC_SET_ID_DATA_PORT_WRITES_COALESCING,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_bdw = 1;
+int i915_oa_n_builtin_metric_sets_bdw = 22;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -300,66 +321,4819 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x065c2100 },
+	{ _MMIO(0x9888), 0x0a5c0041 },
+	{ _MMIO(0x9888), 0x0c5c6600 },
+	{ _MMIO(0x9888), 0x005c6580 },
+	{ _MMIO(0x9888), 0x085c8000 },
+	{ _MMIO(0x9888), 0x0e5c8000 },
+	{ _MMIO(0x9888), 0x00580042 },
+	{ _MMIO(0x9888), 0x08582080 },
+	{ _MMIO(0x9888), 0x0c58004c },
+	{ _MMIO(0x9888), 0x0e582580 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x185b1000 },
+	{ _MMIO(0x9888), 0x1a5b0104 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x08380042 },
+	{ _MMIO(0x9888), 0x0a382080 },
+	{ _MMIO(0x9888), 0x0e38404c },
+	{ _MMIO(0x9888), 0x0238404b },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x16380000 },
+	{ _MMIO(0x9888), 0x18381145 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f801062 },
+	{ _MMIO(0x9888), 0x41801084 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_2_slices_0x02[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800005 },
+	{ _MMIO(0x9888), 0x06dc2100 },
+	{ _MMIO(0x9888), 0x0adc0041 },
+	{ _MMIO(0x9888), 0x0cdc6600 },
+	{ _MMIO(0x9888), 0x00dc6580 },
+	{ _MMIO(0x9888), 0x08dc8000 },
+	{ _MMIO(0x9888), 0x0edc8000 },
+	{ _MMIO(0x9888), 0x00d80042 },
+	{ _MMIO(0x9888), 0x08d82080 },
+	{ _MMIO(0x9888), 0x0cd8004c },
+	{ _MMIO(0x9888), 0x0ed82580 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x18db1000 },
+	{ _MMIO(0x9888), 0x1adb0104 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x08b80042 },
+	{ _MMIO(0x9888), 0x0ab82080 },
+	{ _MMIO(0x9888), 0x0eb8404c },
+	{ _MMIO(0x9888), 0x02b8404b },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x16b80000 },
+	{ _MMIO(0x9888), 0x18b81145 },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b92000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x238b0540 },
+	{ _MMIO(0x9888), 0x258baaa0 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f850a80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x17808137 },
+	{ _MMIO(0x9888), 0x1980c147 },
+	{ _MMIO(0x9888), 0x1b80c0e5 },
+	{ _MMIO(0x9888), 0x1d80c0e3 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x13804000 },
+	{ _MMIO(0x9888), 0x15800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d805000 },
+	{ _MMIO(0x9888), 0x4f800555 },
+	{ _MMIO(0x9888), 0x43800062 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800062 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800062 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800062 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) {
+		regs[n] = mux_config_compute_basic_0_slices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.slice_mask & 0x02) {
+		regs[n] = mux_config_compute_basic_2_slices_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_2_slices_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0a1e0000 },
+	{ _MMIO(0x9888), 0x0c1f000f },
+	{ _MMIO(0x9888), 0x10176800 },
+	{ _MMIO(0x9888), 0x1191001f },
+	{ _MMIO(0x9888), 0x0b880320 },
+	{ _MMIO(0x9888), 0x01890c40 },
+	{ _MMIO(0x9888), 0x118a1c00 },
+	{ _MMIO(0x9888), 0x118d7c00 },
+	{ _MMIO(0x9888), 0x118e0020 },
+	{ _MMIO(0x9888), 0x118f4c00 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x13900001 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x0c3d8000 },
+	{ _MMIO(0x9888), 0x06584000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x081e0040 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x021f5400 },
+	{ _MMIO(0x9888), 0x001f0000 },
+	{ _MMIO(0x9888), 0x101f0010 },
+	{ _MMIO(0x9888), 0x0e1f0080 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c13c000 },
+	{ _MMIO(0x9888), 0x06164000 },
+	{ _MMIO(0x9888), 0x06170012 },
+	{ _MMIO(0x9888), 0x00170000 },
+	{ _MMIO(0x9888), 0x01910005 },
+	{ _MMIO(0x9888), 0x07880002 },
+	{ _MMIO(0x9888), 0x01880c00 },
+	{ _MMIO(0x9888), 0x0f880000 },
+	{ _MMIO(0x9888), 0x0d880000 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x09890032 },
+	{ _MMIO(0x9888), 0x078a0800 },
+	{ _MMIO(0x9888), 0x0f8a0a00 },
+	{ _MMIO(0x9888), 0x198a4000 },
+	{ _MMIO(0x9888), 0x1b8a2000 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x038a4000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b54c0 },
+	{ _MMIO(0x9888), 0x258baa55 },
+	{ _MMIO(0x9888), 0x278b0019 },
+	{ _MMIO(0x9888), 0x198c0100 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0f8d0015 },
+	{ _MMIO(0x9888), 0x018d1000 },
+	{ _MMIO(0x9888), 0x098d8000 },
+	{ _MMIO(0x9888), 0x0b8df000 },
+	{ _MMIO(0x9888), 0x0d8d3000 },
+	{ _MMIO(0x9888), 0x038de000 },
+	{ _MMIO(0x9888), 0x058d3000 },
+	{ _MMIO(0x9888), 0x0d8e0004 },
+	{ _MMIO(0x9888), 0x058e000c },
+	{ _MMIO(0x9888), 0x098e0000 },
+	{ _MMIO(0x9888), 0x078e0000 },
+	{ _MMIO(0x9888), 0x038e0000 },
+	{ _MMIO(0x9888), 0x0b8f0020 },
+	{ _MMIO(0x9888), 0x198f0c00 },
+	{ _MMIO(0x9888), 0x078f8000 },
+	{ _MMIO(0x9888), 0x098f4000 },
+	{ _MMIO(0x9888), 0x0b900980 },
+	{ _MMIO(0x9888), 0x03900d80 },
+	{ _MMIO(0x9888), 0x01900000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801111 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f801011 },
+	{ _MMIO(0x9888), 0x43800443 },
+	{ _MMIO(0x9888), 0x51801111 },
+	{ _MMIO(0x9888), 0x45800422 },
+	{ _MMIO(0x9888), 0x53801111 },
+	{ _MMIO(0x9888), 0x47800c60 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800422 },
+	{ _MMIO(0x9888), 0x41800021 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845800 },
+	{ _MMIO(0x9888), 0x15840018 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385000a },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840018 },
+	{ _MMIO(0x9888), 0x07844c80 },
+	{ _MMIO(0x9888), 0x09840d9a },
+	{ _MMIO(0x9888), 0x0b840e9c },
+	{ _MMIO(0x9888), 0x0d840f9e },
+	{ _MMIO(0x9888), 0x0f840010 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801042 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x198b0343 },
+	{ _MMIO(0x9888), 0x13845400 },
+	{ _MMIO(0x9888), 0x3580001a },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b6b62 },
+	{ _MMIO(0x9888), 0x078b006a },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x1f85a080 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01840010 },
+	{ _MMIO(0x9888), 0x07844880 },
+	{ _MMIO(0x9888), 0x09840992 },
+	{ _MMIO(0x9888), 0x0b840a94 },
+	{ _MMIO(0x9888), 0x0d840b96 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x03848000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x2d800147 },
+	{ _MMIO(0x9888), 0x2f8000e5 },
+	{ _MMIO(0x9888), 0x138080e3 },
+	{ _MMIO(0x9888), 0x1580c0e1 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f800000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800842 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801082 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800084 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x143d0160 },
+	{ _MMIO(0x9888), 0x163d2800 },
+	{ _MMIO(0x9888), 0x183d0120 },
+	{ _MMIO(0x9888), 0x105800e0 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x003d0011 },
+	{ _MMIO(0x9888), 0x063d0900 },
+	{ _MMIO(0x9888), 0x083d0a13 },
+	{ _MMIO(0x9888), 0x0a3d0b15 },
+	{ _MMIO(0x9888), 0x0c3d2317 },
+	{ _MMIO(0x9888), 0x043d21b7 },
+	{ _MMIO(0x9888), 0x103d0000 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e5825c1 },
+	{ _MMIO(0x9888), 0x00586100 },
+	{ _MMIO(0x9888), 0x0258204c },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_2_subslices_0x02[] = {
+	{ _MMIO(0x9888), 0x105c00e0 },
+	{ _MMIO(0x9888), 0x145b0160 },
+	{ _MMIO(0x9888), 0x165b2800 },
+	{ _MMIO(0x9888), 0x185b0120 },
+	{ _MMIO(0x9888), 0x0e5c25c1 },
+	{ _MMIO(0x9888), 0x005c6100 },
+	{ _MMIO(0x9888), 0x025c204c },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x005b0011 },
+	{ _MMIO(0x9888), 0x065b0900 },
+	{ _MMIO(0x9888), 0x085b0a13 },
+	{ _MMIO(0x9888), 0x0a5b0b15 },
+	{ _MMIO(0x9888), 0x0c5b2317 },
+	{ _MMIO(0x9888), 0x045b21b7 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x0e5b0000 },
+	{ _MMIO(0x9888), 0x1a5b0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_4_subslices_0x04[] = {
+	{ _MMIO(0x9888), 0x103800e0 },
+	{ _MMIO(0x9888), 0x143a0160 },
+	{ _MMIO(0x9888), 0x163a2800 },
+	{ _MMIO(0x9888), 0x183a0120 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faa2a },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e38a5c1 },
+	{ _MMIO(0x9888), 0x0038a100 },
+	{ _MMIO(0x9888), 0x0238204c },
+	{ _MMIO(0x9888), 0x16388000 },
+	{ _MMIO(0x9888), 0x183802aa },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06380000 },
+	{ _MMIO(0x9888), 0x08388000 },
+	{ _MMIO(0x9888), 0x0a388000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x003a0011 },
+	{ _MMIO(0x9888), 0x063a0900 },
+	{ _MMIO(0x9888), 0x083a0a13 },
+	{ _MMIO(0x9888), 0x0a3a0b15 },
+	{ _MMIO(0x9888), 0x0c3a2317 },
+	{ _MMIO(0x9888), 0x043a21b7 },
+	{ _MMIO(0x9888), 0x103a0000 },
+	{ _MMIO(0x9888), 0x0e3a0000 },
+	{ _MMIO(0x9888), 0x1a3a0000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x238b2aa0 },
+	{ _MMIO(0x9888), 0x258b5551 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_1_subslices_0x08[] = {
+	{ _MMIO(0x9888), 0x14bd0160 },
+	{ _MMIO(0x9888), 0x16bd2800 },
+	{ _MMIO(0x9888), 0x18bd0120 },
+	{ _MMIO(0x9888), 0x10d800e0 },
+	{ _MMIO(0x9888), 0x00dcc000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00bd0011 },
+	{ _MMIO(0x9888), 0x06bd0900 },
+	{ _MMIO(0x9888), 0x08bd0a13 },
+	{ _MMIO(0x9888), 0x0abd0b15 },
+	{ _MMIO(0x9888), 0x0cbd2317 },
+	{ _MMIO(0x9888), 0x04bd21b7 },
+	{ _MMIO(0x9888), 0x10bd0000 },
+	{ _MMIO(0x9888), 0x0ebd0000 },
+	{ _MMIO(0x9888), 0x1abd0000 },
+	{ _MMIO(0x9888), 0x0ed825c1 },
+	{ _MMIO(0x9888), 0x00d86100 },
+	{ _MMIO(0x9888), 0x02d8204c },
+	{ _MMIO(0x9888), 0x06d88000 },
+	{ _MMIO(0x9888), 0x08d8c000 },
+	{ _MMIO(0x9888), 0x0ad8c000 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x04d8c000 },
+	{ _MMIO(0x9888), 0x00db4000 },
+	{ _MMIO(0x9888), 0x0edb4000 },
+	{ _MMIO(0x9888), 0x18db5400 },
+	{ _MMIO(0x9888), 0x1adb0155 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db4000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_3_subslices_0x10[] = {
+	{ _MMIO(0x9888), 0x10dc00e0 },
+	{ _MMIO(0x9888), 0x14db0160 },
+	{ _MMIO(0x9888), 0x16db2800 },
+	{ _MMIO(0x9888), 0x18db0120 },
+	{ _MMIO(0x9888), 0x0edc25c1 },
+	{ _MMIO(0x9888), 0x00dc6100 },
+	{ _MMIO(0x9888), 0x02dc204c },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dcc000 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x00db0011 },
+	{ _MMIO(0x9888), 0x06db0900 },
+	{ _MMIO(0x9888), 0x08db0a13 },
+	{ _MMIO(0x9888), 0x0adb0b15 },
+	{ _MMIO(0x9888), 0x0cdb2317 },
+	{ _MMIO(0x9888), 0x04db21b7 },
+	{ _MMIO(0x9888), 0x10db0000 },
+	{ _MMIO(0x9888), 0x0edb0000 },
+	{ _MMIO(0x9888), 0x1adb0000 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x00b84000 },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b81555 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_5_subslices_0x20[] = {
+	{ _MMIO(0x9888), 0x10b800e0 },
+	{ _MMIO(0x9888), 0x14ba0160 },
+	{ _MMIO(0x9888), 0x16ba2800 },
+	{ _MMIO(0x9888), 0x18ba0120 },
+	{ _MMIO(0x9888), 0x0c9fa800 },
+	{ _MMIO(0x9888), 0x0e9faa2a },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb8a5c1 },
+	{ _MMIO(0x9888), 0x00b8a100 },
+	{ _MMIO(0x9888), 0x02b8204c },
+	{ _MMIO(0x9888), 0x16b88000 },
+	{ _MMIO(0x9888), 0x18b802aa },
+	{ _MMIO(0x9888), 0x04b80000 },
+	{ _MMIO(0x9888), 0x06b80000 },
+	{ _MMIO(0x9888), 0x08b88000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x00b9a000 },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x00ba0011 },
+	{ _MMIO(0x9888), 0x06ba0900 },
+	{ _MMIO(0x9888), 0x08ba0a13 },
+	{ _MMIO(0x9888), 0x0aba0b15 },
+	{ _MMIO(0x9888), 0x0cba2317 },
+	{ _MMIO(0x9888), 0x04ba21b7 },
+	{ _MMIO(0x9888), 0x10ba0000 },
+	{ _MMIO(0x9888), 0x0eba0000 },
+	{ _MMIO(0x9888), 0x1aba0000 },
+	{ _MMIO(0x9888), 0x01888000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x238b5540 },
+	{ _MMIO(0x9888), 0x258baaa2 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x018c4000 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x018da000 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa2 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800000 },
+};
+
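+/*
+ * Descriptive note (editor's comment, inferred from the tables above): the
+ * compute_extended MUX programming is split into one table per EU subslice,
+ * with the _subslices_0xNN suffix naming the subslice_mask bit it applies
+ * to. The helper below hands back only the tables whose bit is set in the
+ * device's sseu.subslice_mask, so MUX state for fused-off subslices is
+ * never programmed.
+ */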
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 6);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 6);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_compute_extended_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x08) {
+		regs[n] = mux_config_compute_extended_1_subslices_0x08;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_1_subslices_0x08);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x02) {
+		regs[n] = mux_config_compute_extended_2_subslices_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_2_subslices_0x02);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x10) {
+		regs[n] = mux_config_compute_extended_3_subslices_0x10;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_3_subslices_0x10);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x04) {
+		regs[n] = mux_config_compute_extended_4_subslices_0x04;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_4_subslices_0x04);
+		n++;
+	}
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x20) {
+		regs[n] = mux_config_compute_extended_5_subslices_0x20;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_5_subslices_0x20);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x143f00b3 },
+	{ _MMIO(0x9888), 0x14bf00b3 },
+	{ _MMIO(0x9888), 0x138303c0 },
+	{ _MMIO(0x9888), 0x3b800060 },
+	{ _MMIO(0x9888), 0x3d800805 },
+	{ _MMIO(0x9888), 0x003f0029 },
+	{ _MMIO(0x9888), 0x063f1400 },
+	{ _MMIO(0x9888), 0x083f1225 },
+	{ _MMIO(0x9888), 0x0e3f1327 },
+	{ _MMIO(0x9888), 0x103f0000 },
+	{ _MMIO(0x9888), 0x005a4000 },
+	{ _MMIO(0x9888), 0x065a8000 },
+	{ _MMIO(0x9888), 0x085ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x001d4000 },
+	{ _MMIO(0x9888), 0x061d8000 },
+	{ _MMIO(0x9888), 0x081dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1f2a00 },
+	{ _MMIO(0x9888), 0x101f0280 },
+	{ _MMIO(0x9888), 0x00391000 },
+	{ _MMIO(0x9888), 0x06394000 },
+	{ _MMIO(0x9888), 0x08395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x0abf1429 },
+	{ _MMIO(0x9888), 0x0cbf1225 },
+	{ _MMIO(0x9888), 0x00bf1380 },
+	{ _MMIO(0x9888), 0x02bf0026 },
+	{ _MMIO(0x9888), 0x10bf0000 },
+	{ _MMIO(0x9888), 0x0adac000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02da4000 },
+	{ _MMIO(0x9888), 0x0a9dc000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029d4000 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f002a },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0ab95000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b91000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f880003 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a8020 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x238b0520 },
+	{ _MMIO(0x9888), 0x258ba950 },
+	{ _MMIO(0x9888), 0x278b0016 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0001 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x03835180 },
+	{ _MMIO(0x9888), 0x05834022 },
+	{ _MMIO(0x9888), 0x11830000 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x05844000 },
+	{ _MMIO(0x9888), 0x1b80c137 },
+	{ _MMIO(0x9888), 0x1d80c147 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x15804000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x4f800111 },
+	{ _MMIO(0x9888), 0x43800842 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800840 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800800 },
+	{ _MMIO(0x9888), 0x418014a2 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fff2 },
+	{ _MMIO(0x2774), 0x00007ff0 },
+	{ _MMIO(0x2778), 0x0007ffe2 },
+	{ _MMIO(0x277c), 0x00007ff0 },
+	{ _MMIO(0x2780), 0x0007ffc2 },
+	{ _MMIO(0x2784), 0x00007ff0 },
+	{ _MMIO(0x2788), 0x0007ff82 },
+	{ _MMIO(0x278c), 0x00007ff0 },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000bfef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000bfdf },
+	{ _MMIO(0x27a0), 0x0007fffa },
+	{ _MMIO(0x27a4), 0x0000bfbf },
+	{ _MMIO(0x27a8), 0x0007fffa },
+	{ _MMIO(0x27ac), 0x0000bf7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_reads_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_reads_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x163d240b },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x185b5520 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d00b0 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d10a0 },
+	{ _MMIO(0x9888), 0x0c3d11a2 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x0c58c000 },
+	{ _MMIO(0x9888), 0x045b6300 },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x1a5b0155 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381555 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b5555 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
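+/*
+ * Descriptive note (editor's comment): data_port_reads_coalescing only
+ * carries a MUX table for subslice 0 (mask bit 0x01); on parts where that
+ * subslice is fused off the helper below returns zero configs.
+ */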
+static int
+get_data_port_reads_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					  const struct i915_oa_reg **regs,
+					  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_data_port_reads_coalescing_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_data_port_reads_coalescing_0_subslices_0x01);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0xba98ba98 },
+	{ _MMIO(0x2748), 0xba98ba98 },
+	{ _MMIO(0x2744), 0x00003377 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ff72 },
+	{ _MMIO(0x2774), 0x0000bfd0 },
+	{ _MMIO(0x2778), 0x0007ff62 },
+	{ _MMIO(0x277c), 0x0000bfd0 },
+	{ _MMIO(0x2780), 0x0007ff42 },
+	{ _MMIO(0x2784), 0x0000bfd0 },
+	{ _MMIO(0x2788), 0x0007ff02 },
+	{ _MMIO(0x278c), 0x0000bfd0 },
+	{ _MMIO(0x2790), 0x0005fff2 },
+	{ _MMIO(0x2794), 0x0000bfd0 },
+	{ _MMIO(0x2798), 0x0005ffe2 },
+	{ _MMIO(0x279c), 0x0000bfd0 },
+	{ _MMIO(0x27a0), 0x0005ffc2 },
+	{ _MMIO(0x27a4), 0x0000bfd0 },
+	{ _MMIO(0x27a8), 0x0005ff82 },
+	{ _MMIO(0x27ac), 0x0000bfd0 },
+};
+
+static const struct i915_oa_reg flex_eu_config_data_port_writes_coalescing[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_data_port_writes_coalescing_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x103d0005 },
+	{ _MMIO(0x9888), 0x143d0120 },
+	{ _MMIO(0x9888), 0x163d2400 },
+	{ _MMIO(0x9888), 0x1058022f },
+	{ _MMIO(0x9888), 0x105b0000 },
+	{ _MMIO(0x9888), 0x198b0003 },
+	{ _MMIO(0x9888), 0x005cc000 },
+	{ _MMIO(0x9888), 0x065cc000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0e5cc000 },
+	{ _MMIO(0x9888), 0x025c4000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x003d0000 },
+	{ _MMIO(0x9888), 0x063d0094 },
+	{ _MMIO(0x9888), 0x083d0182 },
+	{ _MMIO(0x9888), 0x0a3d1814 },
+	{ _MMIO(0x9888), 0x0e3d0000 },
+	{ _MMIO(0x9888), 0x183d0000 },
+	{ _MMIO(0x9888), 0x1a3d0000 },
+	{ _MMIO(0x9888), 0x0c3d0000 },
+	{ _MMIO(0x9888), 0x0e582242 },
+	{ _MMIO(0x9888), 0x00586700 },
+	{ _MMIO(0x9888), 0x0258004f },
+	{ _MMIO(0x9888), 0x0658c000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x0a58c000 },
+	{ _MMIO(0x9888), 0x045b6a80 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5400 },
+	{ _MMIO(0x9888), 0x1a5b0141 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x0a5b0000 },
+	{ _MMIO(0x9888), 0x0c5b4000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1faaa0 },
+	{ _MMIO(0x9888), 0x101f0282 },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18381415 },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x0039a000 },
+	{ _MMIO(0x9888), 0x0639a000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x02392000 },
+	{ _MMIO(0x9888), 0x04398000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a82a0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x038b6300 },
+	{ _MMIO(0x9888), 0x058b0062 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x238b02a0 },
+	{ _MMIO(0x9888), 0x258b1555 },
+	{ _MMIO(0x9888), 0x278b0014 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x21852aaa },
+	{ _MMIO(0x9888), 0x23850028 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830141 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0xd24), 0x00000000 },
+	{ _MMIO(0x9888), 0x4d801000 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f800001 },
+	{ _MMIO(0x9888), 0x43800000 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800420 },
+	{ _MMIO(0x9888), 0x3f800421 },
+	{ _MMIO(0x9888), 0x41800041 },
+};
+
+static int
+get_data_port_writes_coalescing_mux_config(struct drm_i915_private *dev_priv,
+					   const struct i915_oa_reg **regs,
+					   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_data_port_writes_coalescing_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_data_port_writes_coalescing_0_subslices_0x01);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static int
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_4;
+	lens[n] = ARRAY_SIZE(mux_config_l3_4);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_1;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_2;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x161503e0 },
+	{ _MMIO(0x9888), 0x163503e0 },
+	{ _MMIO(0x9888), 0x165503e0 },
+	{ _MMIO(0x9888), 0x169503e0 },
+	{ _MMIO(0x9888), 0x16b503e0 },
+	{ _MMIO(0x9888), 0x16d503e0 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x04584000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b8000 },
+	{ _MMIO(0x9888), 0x0e1f00a8 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x06141000 },
+	{ _MMIO(0x9888), 0x041500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x0a338000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x0435c300 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x0c538000 },
+	{ _MMIO(0x9888), 0x06544000 },
+	{ _MMIO(0x9888), 0x065500c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dc4000 },
+	{ _MMIO(0x9888), 0x02bd8000 },
+	{ _MMIO(0x9888), 0x00d88000 },
+	{ _MMIO(0x9888), 0x02db4000 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f0002 },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x06ba8000 },
+	{ _MMIO(0x9888), 0x02938000 },
+	{ _MMIO(0x9888), 0x04942000 },
+	{ _MMIO(0x9888), 0x0095c300 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x04b44000 },
+	{ _MMIO(0x9888), 0x02b500c3 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d38000 },
+	{ _MMIO(0x9888), 0x04d48000 },
+	{ _MMIO(0x9888), 0x02d5c300 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b3500 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c40 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41801482 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x14100812 },
+	{ _MMIO(0x9888), 0x14125800 },
+	{ _MMIO(0x9888), 0x161200c0 },
+	{ _MMIO(0x9888), 0x14300812 },
+	{ _MMIO(0x9888), 0x14325800 },
+	{ _MMIO(0x9888), 0x163200c0 },
+	{ _MMIO(0x9888), 0x005c4000 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5cc000 },
+	{ _MMIO(0x9888), 0x003d8000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183d2800 },
+	{ _MMIO(0x9888), 0x00584000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x0858c000 },
+	{ _MMIO(0x9888), 0x005b4000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b9400 },
+	{ _MMIO(0x9888), 0x1a5b002a },
+	{ _MMIO(0x9888), 0x0c1f0800 },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f002a },
+	{ _MMIO(0x9888), 0x00384000 },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18380155 },
+	{ _MMIO(0x9888), 0x00392000 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x00100047 },
+	{ _MMIO(0x9888), 0x06101a80 },
+	{ _MMIO(0x9888), 0x10100000 },
+	{ _MMIO(0x9888), 0x0810c000 },
+	{ _MMIO(0x9888), 0x0811c000 },
+	{ _MMIO(0x9888), 0x08126151 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x00134000 },
+	{ _MMIO(0x9888), 0x0e134000 },
+	{ _MMIO(0x9888), 0x161300a0 },
+	{ _MMIO(0x9888), 0x0a301ac7 },
+	{ _MMIO(0x9888), 0x10300000 },
+	{ _MMIO(0x9888), 0x0c30c000 },
+	{ _MMIO(0x9888), 0x0c31c000 },
+	{ _MMIO(0x9888), 0x0c326151 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x16332a00 },
+	{ _MMIO(0x9888), 0x18330001 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8a2aa0 },
+	{ _MMIO(0x9888), 0x238b0020 },
+	{ _MMIO(0x9888), 0x258b5550 },
+	{ _MMIO(0x9888), 0x278b0001 },
+	{ _MMIO(0x9888), 0x1f850080 },
+	{ _MMIO(0x9888), 0x2185aaa0 },
+	{ _MMIO(0x9888), 0x23850002 },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830015 },
+	{ _MMIO(0x9888), 0x01844000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x11804000 },
+	{ _MMIO(0x9888), 0x17808000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3d800800 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800002 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800884 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800002 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x198b0000 },
+	{ _MMIO(0x9888), 0x078b0066 },
+	{ _MMIO(0x9888), 0x118b0000 },
+	{ _MMIO(0x9888), 0x258b0000 },
+	{ _MMIO(0x9888), 0x21850008 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_bdw(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_READS_COALESCING:
+		dev_priv->perf.oa.n_mux_configs =
+			get_data_port_reads_coalescing_mux_config(dev_priv,
+								  dev_priv->perf.oa.mux_regs,
+								  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_READS_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_reads_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_reads_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_reads_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_DATA_PORT_WRITES_COALESCING:
+		dev_priv->perf.oa.n_mux_configs =
+			get_data_port_writes_coalescing_mux_config(dev_priv,
+								   dev_priv->perf.oa.mux_regs,
+								   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"DATA_PORT_WRITES_COALESCING\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_data_port_writes_coalescing);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_data_port_writes_coalescing;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_data_port_writes_coalescing);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_4_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_1_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_2_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "35fbc9b2-a891-40a6-a38d-022bb7057552",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "233d0544-fff7-4281-8291-e02f222aff72",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "2b255d48-2117-4fef-a8f7-f151e1d25a2c",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "f7fd3220-b466-4a4d-9f98-b0caf3f2394c",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "e99ccaca-821c-4df9-97a7-96bdb7204e43",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27a364dc-8225-4ecb-b607-d6f1925598d9",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_data_port_reads_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_READS_COALESCING);
+}
+
+static struct device_attribute dev_attr_data_port_reads_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_reads_coalescing_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_data_port_reads_coalescing[] = {
+	&dev_attr_data_port_reads_coalescing_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_data_port_reads_coalescing = {
+	.name = "857fc630-2f09-4804-85f1-084adfadd5ab",
+	.attrs =  attrs_data_port_reads_coalescing,
+};
+
+static ssize_t
+show_data_port_writes_coalescing_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_DATA_PORT_WRITES_COALESCING);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_data_port_writes_coalescing_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_data_port_writes_coalescing_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_data_port_writes_coalescing[] = {
+	&dev_attr_data_port_writes_coalescing_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_data_port_writes_coalescing = {
+	.name = "343ebc99-4a55-414c-8c17-d8e259cf5e20",
+	.attrs =  attrs_data_port_writes_coalescing,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "7bdafd88-a4fa-4ed5-bc09-1a977aa5be3e",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
 }
 
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "9385ebb2-f34f-4aa5-aec5-7e9cbbea0f0b",
+	.attrs =  attrs_l3_1,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_l3_2_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_l3_2_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "b541bd57-0e0f-4154-b4c0-5858010a2bf7",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_l3_2 = {
+	.name = "446ae59b-ff2e-41c9-b49e-0184a54bf00a",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "84a7956f-1ea4-4d0d-837f-e39a0376e38c",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "92b493d9-df18-4bed-be06-5cac6f2a6f5f",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "14345c35-cc46-40d0-bb04-6ed1fbb43679",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "f0c6ba37-d3d3-4211-91b5-226730312a54",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "30bf3702-48cf-4bca-b412-7cf50bb2f564",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "238bec85-df05-44f3-b905-d166712f2451",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "24bf02cd-8693-4583-981c-c4165b33da01",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "8fb61ba2-2fbb-454c-a136-2dec5a8a595e",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "e1743ca0-7fc8-410b-a066-de7bbb9280b7",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "d6de6f55-e526-4f79-a6a6-d7315c09044e",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -374,9 +5148,177 @@ i915_perf_register_sysfs_bdw(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+		if (ret)
+			goto error_data_port_reads_coalescing;
+	}
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+		if (ret)
+			goto error_data_port_writes_coalescing;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+error_data_port_writes_coalescing:
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+error_data_port_reads_coalescing:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -389,4 +5331,46 @@ i915_perf_unregister_sysfs_bdw(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_data_port_reads_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_reads_coalescing);
+	if (get_data_port_writes_coalescing_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_data_port_writes_coalescing);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_bxt.c b/drivers/gpu/drm/i915/i915_oa_bxt.c
index 345ec1d3faa7..93864d8f32dd 100644
--- a/drivers/gpu/drm/i915/i915_oa_bxt.c
+++ b/drivers/gpu/drm/i915/i915_oa_bxt.c
@@ -33,9 +33,23 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_bxt = 1;
+int i915_oa_n_builtin_metric_sets_bxt = 15;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -156,6 +170,1622 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d4000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x0c2e1400 },
+	{ _MMIO(0x9888), 0x0e2e5100 },
+	{ _MMIO(0x9888), 0x102e0114 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004f6b42 },
+	{ _MMIO(0x9888), 0x064f6200 },
+	{ _MMIO(0x9888), 0x084f4100 },
+	{ _MMIO(0x9888), 0x0a4f0061 },
+	{ _MMIO(0x9888), 0x0c4f6c4c },
+	{ _MMIO(0x9888), 0x0e4f4b00 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0f8800 },
+	{ _MMIO(0x9888), 0x1c0f08a2 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c1451 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c0010 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x19938a28 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x19900177 },
+	{ _MMIO(0x9888), 0x1b900178 },
+	{ _MMIO(0x9888), 0x1d900125 },
+	{ _MMIO(0x9888), 0x1f900123 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c2e001f },
+	{ _MMIO(0x9888), 0x0a2f0000 },
+	{ _MMIO(0x9888), 0x10186800 },
+	{ _MMIO(0x9888), 0x11810019 },
+	{ _MMIO(0x9888), 0x15810013 },
+	{ _MMIO(0x9888), 0x13820020 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x17840000 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x21860000 },
+	{ _MMIO(0x9888), 0x178703e0 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x022e5400 },
+	{ _MMIO(0x9888), 0x002e0000 },
+	{ _MMIO(0x9888), 0x0e2e0080 },
+	{ _MMIO(0x9888), 0x082f0040 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x06174000 },
+	{ _MMIO(0x9888), 0x06180012 },
+	{ _MMIO(0x9888), 0x00180000 },
+	{ _MMIO(0x9888), 0x0d804000 },
+	{ _MMIO(0x9888), 0x0f804000 },
+	{ _MMIO(0x9888), 0x05804000 },
+	{ _MMIO(0x9888), 0x09810200 },
+	{ _MMIO(0x9888), 0x0b810030 },
+	{ _MMIO(0x9888), 0x03810003 },
+	{ _MMIO(0x9888), 0x21819140 },
+	{ _MMIO(0x9888), 0x23819050 },
+	{ _MMIO(0x9888), 0x25810018 },
+	{ _MMIO(0x9888), 0x0b820980 },
+	{ _MMIO(0x9888), 0x03820d80 },
+	{ _MMIO(0x9888), 0x11820000 },
+	{ _MMIO(0x9888), 0x0182c000 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x09824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0d830004 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x0f831000 },
+	{ _MMIO(0x9888), 0x01848072 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x09844000 },
+	{ _MMIO(0x9888), 0x0f848000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x09860092 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x01869100 },
+	{ _MMIO(0x9888), 0x0f870065 },
+	{ _MMIO(0x9888), 0x01870000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b952000 },
+	{ _MMIO(0x9888), 0x1d955055 },
+	{ _MMIO(0x9888), 0x1f951455 },
+	{ _MMIO(0x9888), 0x0992a000 },
+	{ _MMIO(0x9888), 0x0f928000 },
+	{ _MMIO(0x9888), 0x1192a800 },
+	{ _MMIO(0x9888), 0x1392028a },
+	{ _MMIO(0x9888), 0x0b92a000 },
+	{ _MMIO(0x9888), 0x0d922000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c01 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900863 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900c22 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x41900003 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900170 },
+	{ _MMIO(0x9888), 0x21900171 },
+	{ _MMIO(0x9888), 0x23900172 },
+	{ _MMIO(0x9888), 0x25900173 },
+	{ _MMIO(0x9888), 0x27900174 },
+	{ _MMIO(0x9888), 0x29900175 },
+	{ _MMIO(0x9888), 0x2b900176 },
+	{ _MMIO(0x9888), 0x2d900177 },
+	{ _MMIO(0x9888), 0x2f90017f },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900000 },
+	{ _MMIO(0x9888), 0x41900080 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900180 },
+	{ _MMIO(0x9888), 0x21900181 },
+	{ _MMIO(0x9888), 0x23900182 },
+	{ _MMIO(0x9888), 0x25900183 },
+	{ _MMIO(0x9888), 0x27900184 },
+	{ _MMIO(0x9888), 0x29900185 },
+	{ _MMIO(0x9888), 0x2b900186 },
+	{ _MMIO(0x9888), 0x2d900187 },
+	{ _MMIO(0x9888), 0x2f900170 },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x141c0160 },
+	{ _MMIO(0x9888), 0x161c0015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d5000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x0c2e5400 },
+	{ _MMIO(0x9888), 0x0e2e5515 },
+	{ _MMIO(0x9888), 0x102e0155 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x0e4cc000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0a4ea000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x0e4f4b41 },
+	{ _MMIO(0x9888), 0x004f4200 },
+	{ _MMIO(0x9888), 0x024f404c },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0a1bc000 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x001c0031 },
+	{ _MMIO(0x9888), 0x061c1900 },
+	{ _MMIO(0x9888), 0x081c1a33 },
+	{ _MMIO(0x9888), 0x0a1c1b35 },
+	{ _MMIO(0x9888), 0x0c1c3337 },
+	{ _MMIO(0x9888), 0x041c31c7 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0fa8aa },
+	{ _MMIO(0x9888), 0x1c0f0aaa },
+	{ _MMIO(0x9888), 0x182c8000 },
+	{ _MMIO(0x9888), 0x1c2c6aaa },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c2950 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993aaaa },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900400 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900001 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c03b0 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x0c2e0400 },
+	{ _MMIO(0x9888), 0x0e2e1500 },
+	{ _MMIO(0x9888), 0x102e0140 },
+	{ _MMIO(0x9888), 0x044c4000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004e2000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x1a4f4001 },
+	{ _MMIO(0x9888), 0x1c4f5005 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x180f1000 },
+	{ _MMIO(0x9888), 0x1a0fa800 },
+	{ _MMIO(0x9888), 0x1c0f0a00 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c4015 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x03931980 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993a00a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900178 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x022d4000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x064c8000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x024f6100 },
+	{ _MMIO(0x9888), 0x044f416b },
+	{ _MMIO(0x9888), 0x064f004b },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1a0f02a8 },
+	{ _MMIO(0x9888), 0x1a2c5500 },
+	{ _MMIO(0x9888), 0x0f808000 },
+	{ _MMIO(0x9888), 0x25810020 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1f951000 },
+	{ _MMIO(0x9888), 0x13920200 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4d900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_gte_0x03[] = {
+	{ _MMIO(0x9888), 0x12643400 },
+	{ _MMIO(0x9888), 0x12653400 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x0a640024 },
+	{ _MMIO(0x9888), 0x10640000 },
+	{ _MMIO(0x9888), 0x04640000 },
+	{ _MMIO(0x9888), 0x0c650024 },
+	{ _MMIO(0x9888), 0x10650000 },
+	{ _MMIO(0x9888), 0x06650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1_0_sku_lt_0x03[] = {
+	{ _MMIO(0x9888), 0x14640340 },
+	{ _MMIO(0x9888), 0x14650340 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x04642400 },
+	{ _MMIO(0x9888), 0x22640000 },
+	{ _MMIO(0x9888), 0x1a640000 },
+	{ _MMIO(0x9888), 0x06650024 },
+	{ _MMIO(0x9888), 0x22650000 },
+	{ _MMIO(0x9888), 0x1c650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (dev_priv->drm.pdev->revision >= 0x03) {
+		regs[n] = mux_config_l3_1_0_sku_gte_0x03;
+		lens[n] = ARRAY_SIZE(mux_config_l3_1_0_sku_gte_0x03);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision < 0x03) {
+		regs[n] = mux_config_l3_1_0_sku_lt_0x03;
+		lens[n] = ARRAY_SIZE(mux_config_l3_1_0_sku_lt_0x03);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102d7800 },
+	{ _MMIO(0x9888), 0x122d79e0 },
+	{ _MMIO(0x9888), 0x0c2f0004 },
+	{ _MMIO(0x9888), 0x100e3800 },
+	{ _MMIO(0x9888), 0x180f0005 },
+	{ _MMIO(0x9888), 0x002d0940 },
+	{ _MMIO(0x9888), 0x022d802f },
+	{ _MMIO(0x9888), 0x042d4013 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0050 },
+	{ _MMIO(0x9888), 0x022f0010 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x040e0480 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x060f0027 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x1a0f0040 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x439014a0 },
+	{ _MMIO(0x9888), 0x459000a4 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x121300a0 },
+	{ _MMIO(0x9888), 0x141600ab },
+	{ _MMIO(0x9888), 0x123300a0 },
+	{ _MMIO(0x9888), 0x143600ab },
+	{ _MMIO(0x9888), 0x125300a0 },
+	{ _MMIO(0x9888), 0x145600ab },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e01a0 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0065 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0800 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f023f },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2cc030 },
+	{ _MMIO(0x9888), 0x04132180 },
+	{ _MMIO(0x9888), 0x02130000 },
+	{ _MMIO(0x9888), 0x0c148000 },
+	{ _MMIO(0x9888), 0x0e142000 },
+	{ _MMIO(0x9888), 0x04148000 },
+	{ _MMIO(0x9888), 0x1e150140 },
+	{ _MMIO(0x9888), 0x1c150040 },
+	{ _MMIO(0x9888), 0x0c163000 },
+	{ _MMIO(0x9888), 0x0e160068 },
+	{ _MMIO(0x9888), 0x10160000 },
+	{ _MMIO(0x9888), 0x18160000 },
+	{ _MMIO(0x9888), 0x0a164000 },
+	{ _MMIO(0x9888), 0x04330043 },
+	{ _MMIO(0x9888), 0x02330000 },
+	{ _MMIO(0x9888), 0x0234a000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x1c350015 },
+	{ _MMIO(0x9888), 0x02363460 },
+	{ _MMIO(0x9888), 0x10360000 },
+	{ _MMIO(0x9888), 0x04360000 },
+	{ _MMIO(0x9888), 0x06360000 },
+	{ _MMIO(0x9888), 0x08364000 },
+	{ _MMIO(0x9888), 0x06530043 },
+	{ _MMIO(0x9888), 0x02530000 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x06542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550100 },
+	{ _MMIO(0x9888), 0x0e563000 },
+	{ _MMIO(0x9888), 0x00563400 },
+	{ _MMIO(0x9888), 0x10560000 },
+	{ _MMIO(0x9888), 0x18560000 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x0c564000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9014a0 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900820 },
+	{ _MMIO(0x9888), 0x45901022 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x141a0000 },
+	{ _MMIO(0x9888), 0x143a0000 },
+	{ _MMIO(0x9888), 0x145a0000 },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0150 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e006a },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0bc0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f0302 },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2c00f0 },
+	{ _MMIO(0x9888), 0x021a3080 },
+	{ _MMIO(0x9888), 0x041a31e5 },
+	{ _MMIO(0x9888), 0x02148000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150054 },
+	{ _MMIO(0x9888), 0x06168000 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x0c3a3280 },
+	{ _MMIO(0x9888), 0x0e3a0063 },
+	{ _MMIO(0x9888), 0x063a0061 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x0c348000 },
+	{ _MMIO(0x9888), 0x0e342000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1e350140 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x18360028 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x0e5a3080 },
+	{ _MMIO(0x9888), 0x005a3280 },
+	{ _MMIO(0x9888), 0x025a0063 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x02542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550001 },
+	{ _MMIO(0x9888), 0x18560080 },
+	{ _MMIO(0x9888), 0x02568000 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x45901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x141a026b },
+	{ _MMIO(0x9888), 0x143a0173 },
+	{ _MMIO(0x9888), 0x145a026b },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0069 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x180f6000 },
+	{ _MMIO(0x9888), 0x1a0f030a },
+	{ _MMIO(0x9888), 0x1a2c03c0 },
+	{ _MMIO(0x9888), 0x041a37e7 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150050 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x003a3380 },
+	{ _MMIO(0x9888), 0x063a006f },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x00348000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1a352000 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x02368000 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x025a37e7 },
+	{ _MMIO(0x9888), 0x0254a000 },
+	{ _MMIO(0x9888), 0x1c550005 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x06568000 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900020 },
+	{ _MMIO(0x9888), 0x45901080 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x141a001f },
+	{ _MMIO(0x9888), 0x143a001f },
+	{ _MMIO(0x9888), 0x145a001f },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0094 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x1a0f00e0 },
+	{ _MMIO(0x9888), 0x1a2c0c00 },
+	{ _MMIO(0x9888), 0x061a0063 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x06142000 },
+	{ _MMIO(0x9888), 0x1c150100 },
+	{ _MMIO(0x9888), 0x0c168000 },
+	{ _MMIO(0x9888), 0x043a3180 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x04348000 },
+	{ _MMIO(0x9888), 0x1c350040 },
+	{ _MMIO(0x9888), 0x0a368000 },
+	{ _MMIO(0x9888), 0x045a0063 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x1c550010 },
+	{ _MMIO(0x9888), 0x08568000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900004 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x19800000 },
+	{ _MMIO(0x9888), 0x07800063 },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x23810008 },
+	{ _MMIO(0x9888), 0x1d950400 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.n_mux_configs = 0;
@@ -164,14 +1794,352 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
 		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
 		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -181,14 +2149,40 @@ int i915_oa_select_metric_set_bxt(struct drm_i915_private *dev_priv)
 		}
 
 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_compute_extra;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_compute_extra);
 
 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_compute_extra;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
 
 		return 0;
 	default:
@@ -218,6 +2212,314 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };
 
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "012d72cf-82a9-4d25-8ddf-74076fd30797",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "ce416533-e49e-4211-80af-ec513590a914",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "398e2452-18d7-42d0-b241-e4d0a9148ada",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "d324a0d6-7269-4847-a5c2-6f71ddc7fed5",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "caf3596a-7bb1-4dec-b3b3-2a080d283b49",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "49b956e2-d5b9-47e0-9d8a-cee5e8cec527",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "f64ef50a-bdba-4b35-8f09-203c13d8ee5a",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "00ad5a41-7eab-4f7a-9103-49d411c67219",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "46dc44ca-491c-4cc1-a951-e7b3e62bf02b",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "8364e2a8-af63-40af-b0d5-42969a255654",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "175c8092-cb25-4d1e-8dc7-b4fdd39e2d92",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "d260f03f-b34d-4b49-a44e-436819117332",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "fa6ecf21-2cb8-4d0b-9308-6e4a7b4ca87a",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "5ee72f5c-092f-421e-8b70-225f7c3e9612",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 {
@@ -230,9 +2532,121 @@ i915_perf_register_sysfs_bxt(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -245,4 +2659,32 @@ i915_perf_unregister_sysfs_bxt(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_chv.c b/drivers/gpu/drm/i915/i915_oa_chv.c
index b15f6c980d11..aa6bece7e75f 100644
--- a/drivers/gpu/drm/i915/i915_oa_chv.c
+++ b/drivers/gpu/drm/i915/i915_oa_chv.c
@@ -33,9 +33,22 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_L3_4,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER_1,
+	METRIC_SET_ID_SAMPLER_2,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_chv = 1;
+int i915_oa_n_builtin_metric_sets_chv = 14;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2740), 0x00000000 },
@@ -146,6 +159,1874 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x2e5800e0 },
+	{ _MMIO(0x9888), 0x2e3800e0 },
+	{ _MMIO(0x9888), 0x3580024f },
+	{ _MMIO(0x9888), 0x3d800140 },
+	{ _MMIO(0x9888), 0x08580042 },
+	{ _MMIO(0x9888), 0x0c580040 },
+	{ _MMIO(0x9888), 0x1058004c },
+	{ _MMIO(0x9888), 0x1458004b },
+	{ _MMIO(0x9888), 0x04580000 },
+	{ _MMIO(0x9888), 0x00580000 },
+	{ _MMIO(0x9888), 0x00195555 },
+	{ _MMIO(0x9888), 0x06380042 },
+	{ _MMIO(0x9888), 0x0a380040 },
+	{ _MMIO(0x9888), 0x0e38004c },
+	{ _MMIO(0x9888), 0x1238004b },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x00384444 },
+	{ _MMIO(0x9888), 0x003a5555 },
+	{ _MMIO(0x9888), 0x018bffff },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x17800074 },
+	{ _MMIO(0x9888), 0x1980007d },
+	{ _MMIO(0x9888), 0x1b80007c },
+	{ _MMIO(0x9888), 0x1d8000b6 },
+	{ _MMIO(0x9888), 0x1f8000b7 },
+	{ _MMIO(0x9888), 0x05800000 },
+	{ _MMIO(0x9888), 0x03800000 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x4980012a },
+	{ _MMIO(0x9888), 0x4b80012a },
+	{ _MMIO(0x9888), 0x4d80012a },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x518001ce },
+	{ _MMIO(0x9888), 0x538001ce },
+	{ _MMIO(0x9888), 0x5580000e },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x261e0000 },
+	{ _MMIO(0x9888), 0x281f000f },
+	{ _MMIO(0x9888), 0x2817001a },
+	{ _MMIO(0x9888), 0x2791001f },
+	{ _MMIO(0x9888), 0x27880019 },
+	{ _MMIO(0x9888), 0x2d890000 },
+	{ _MMIO(0x9888), 0x278a0007 },
+	{ _MMIO(0x9888), 0x298d001f },
+	{ _MMIO(0x9888), 0x278e0020 },
+	{ _MMIO(0x9888), 0x2b8f0012 },
+	{ _MMIO(0x9888), 0x29900000 },
+	{ _MMIO(0x9888), 0x00184000 },
+	{ _MMIO(0x9888), 0x02181000 },
+	{ _MMIO(0x9888), 0x02194000 },
+	{ _MMIO(0x9888), 0x141e0002 },
+	{ _MMIO(0x9888), 0x041e0000 },
+	{ _MMIO(0x9888), 0x001e0000 },
+	{ _MMIO(0x9888), 0x221f0015 },
+	{ _MMIO(0x9888), 0x041f0000 },
+	{ _MMIO(0x9888), 0x001f4000 },
+	{ _MMIO(0x9888), 0x021f0000 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0213c000 },
+	{ _MMIO(0x9888), 0x02164000 },
+	{ _MMIO(0x9888), 0x24170012 },
+	{ _MMIO(0x9888), 0x04170000 },
+	{ _MMIO(0x9888), 0x07910005 },
+	{ _MMIO(0x9888), 0x05910000 },
+	{ _MMIO(0x9888), 0x01911500 },
+	{ _MMIO(0x9888), 0x03910501 },
+	{ _MMIO(0x9888), 0x0d880002 },
+	{ _MMIO(0x9888), 0x1d880003 },
+	{ _MMIO(0x9888), 0x05880000 },
+	{ _MMIO(0x9888), 0x0b890032 },
+	{ _MMIO(0x9888), 0x1b890031 },
+	{ _MMIO(0x9888), 0x05890000 },
+	{ _MMIO(0x9888), 0x01890040 },
+	{ _MMIO(0x9888), 0x03890040 },
+	{ _MMIO(0x9888), 0x098a0000 },
+	{ _MMIO(0x9888), 0x198a0004 },
+	{ _MMIO(0x9888), 0x058a0000 },
+	{ _MMIO(0x9888), 0x018a8050 },
+	{ _MMIO(0x9888), 0x038a2050 },
+	{ _MMIO(0x9888), 0x018b95a9 },
+	{ _MMIO(0x9888), 0x038be5a9 },
+	{ _MMIO(0x9888), 0x018c1500 },
+	{ _MMIO(0x9888), 0x038c0501 },
+	{ _MMIO(0x9888), 0x178d0015 },
+	{ _MMIO(0x9888), 0x058d0000 },
+	{ _MMIO(0x9888), 0x138e0004 },
+	{ _MMIO(0x9888), 0x218e000c },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x018e0500 },
+	{ _MMIO(0x9888), 0x038e0101 },
+	{ _MMIO(0x9888), 0x0f8f0027 },
+	{ _MMIO(0x9888), 0x058f0000 },
+	{ _MMIO(0x9888), 0x018f0000 },
+	{ _MMIO(0x9888), 0x038f0001 },
+	{ _MMIO(0x9888), 0x11900013 },
+	{ _MMIO(0x9888), 0x1f900017 },
+	{ _MMIO(0x9888), 0x05900000 },
+	{ _MMIO(0x9888), 0x01900100 },
+	{ _MMIO(0x9888), 0x03900001 },
+	{ _MMIO(0x9888), 0x01845555 },
+	{ _MMIO(0x9888), 0x03845555 },
+	{ _MMIO(0x9888), 0x418000aa },
+	{ _MMIO(0x9888), 0x438000aa },
+	{ _MMIO(0x9888), 0x458000aa },
+	{ _MMIO(0x9888), 0x478000aa },
+	{ _MMIO(0x9888), 0x4980018c },
+	{ _MMIO(0x9888), 0x4b80014b },
+	{ _MMIO(0x9888), 0x4d800128 },
+	{ _MMIO(0x9888), 0x4f80012a },
+	{ _MMIO(0x9888), 0x51800187 },
+	{ _MMIO(0x9888), 0x5380014b },
+	{ _MMIO(0x9888), 0x55800149 },
+	{ _MMIO(0x9888), 0x5780010a },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fff7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x105c0232 },
+	{ _MMIO(0x9888), 0x10580232 },
+	{ _MMIO(0x9888), 0x10380232 },
+	{ _MMIO(0x9888), 0x10dc0232 },
+	{ _MMIO(0x9888), 0x10d80232 },
+	{ _MMIO(0x9888), 0x10b80232 },
+	{ _MMIO(0x9888), 0x118e4400 },
+	{ _MMIO(0x9888), 0x025c6080 },
+	{ _MMIO(0x9888), 0x045c004b },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x00582080 },
+	{ _MMIO(0x9888), 0x0258004b },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x04386080 },
+	{ _MMIO(0x9888), 0x0638404b },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a380000 },
+	{ _MMIO(0x9888), 0x0c380000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0cdc25c1 },
+	{ _MMIO(0x9888), 0x0adcc000 },
+	{ _MMIO(0x9888), 0x0ad825c1 },
+	{ _MMIO(0x9888), 0x18db4000 },
+	{ _MMIO(0x9888), 0x1adb0001 },
+	{ _MMIO(0x9888), 0x0e9f8000 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb825c1 },
+	{ _MMIO(0x9888), 0x18b80154 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x0d88c000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baa05 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c5400 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x098dc000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x098e05c0 },
+	{ _MMIO(0x9888), 0x058e0000 },
+	{ _MMIO(0x9888), 0x198f0020 },
+	{ _MMIO(0x9888), 0x2185aa0a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x19835000 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x19808000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x51800040 },
+	{ _MMIO(0x9888), 0x43800400 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800c62 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801042 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x418014a4 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x10bf03da },
+	{ _MMIO(0x9888), 0x14bf0001 },
+	{ _MMIO(0x9888), 0x12980340 },
+	{ _MMIO(0x9888), 0x12990340 },
+	{ _MMIO(0x9888), 0x0cbf1187 },
+	{ _MMIO(0x9888), 0x0ebf1205 },
+	{ _MMIO(0x9888), 0x00bf0500 },
+	{ _MMIO(0x9888), 0x02bf042b },
+	{ _MMIO(0x9888), 0x04bf002c },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x00da8000 },
+	{ _MMIO(0x9888), 0x02dac000 },
+	{ _MMIO(0x9888), 0x04da4000 },
+	{ _MMIO(0x9888), 0x04983400 },
+	{ _MMIO(0x9888), 0x10980000 },
+	{ _MMIO(0x9888), 0x06990034 },
+	{ _MMIO(0x9888), 0x10990000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x009d8000 },
+	{ _MMIO(0x9888), 0x029dc000 },
+	{ _MMIO(0x9888), 0x049d4000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00ba },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x00b94000 },
+	{ _MMIO(0x9888), 0x02b95000 },
+	{ _MMIO(0x9888), 0x04b91000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0cba4000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x258b800a },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x103f03da },
+	{ _MMIO(0x9888), 0x143f0001 },
+	{ _MMIO(0x9888), 0x12180340 },
+	{ _MMIO(0x9888), 0x12190340 },
+	{ _MMIO(0x9888), 0x0c3f1187 },
+	{ _MMIO(0x9888), 0x0e3f1205 },
+	{ _MMIO(0x9888), 0x003f0500 },
+	{ _MMIO(0x9888), 0x023f042b },
+	{ _MMIO(0x9888), 0x043f002c },
+	{ _MMIO(0x9888), 0x0c5ac000 },
+	{ _MMIO(0x9888), 0x0e5ac000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x04183400 },
+	{ _MMIO(0x9888), 0x10180000 },
+	{ _MMIO(0x9888), 0x06190034 },
+	{ _MMIO(0x9888), 0x10190000 },
+	{ _MMIO(0x9888), 0x0c1dc000 },
+	{ _MMIO(0x9888), 0x0e1dc000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x101f02a8 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00ba },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c395000 },
+	{ _MMIO(0x9888), 0x0e395000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x0c3a4000 },
+	{ _MMIO(0x9888), 0x1b8aa800 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258b4005 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800000 },
+	{ _MMIO(0x9888), 0x47800000 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800060 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x121b0340 },
+	{ _MMIO(0x9888), 0x103f0274 },
+	{ _MMIO(0x9888), 0x123f0000 },
+	{ _MMIO(0x9888), 0x129b0340 },
+	{ _MMIO(0x9888), 0x10bf0274 },
+	{ _MMIO(0x9888), 0x12bf0000 },
+	{ _MMIO(0x9888), 0x041b3400 },
+	{ _MMIO(0x9888), 0x101b0000 },
+	{ _MMIO(0x9888), 0x045c8000 },
+	{ _MMIO(0x9888), 0x0a3d4000 },
+	{ _MMIO(0x9888), 0x003f0080 },
+	{ _MMIO(0x9888), 0x023f0793 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f002a },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04399000 },
+	{ _MMIO(0x9888), 0x069b0034 },
+	{ _MMIO(0x9888), 0x109b0000 },
+	{ _MMIO(0x9888), 0x06dc4000 },
+	{ _MMIO(0x9888), 0x0cbd4000 },
+	{ _MMIO(0x9888), 0x0cbf0981 },
+	{ _MMIO(0x9888), 0x0ebf0a0f },
+	{ _MMIO(0x9888), 0x06d84000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0cdb4000 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0080 },
+	{ _MMIO(0x9888), 0x0cb84000 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800c00 },
+	{ _MMIO(0x9888), 0x47800c63 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a5 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800045 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_4[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_4[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_4[] = {
+	{ _MMIO(0x9888), 0x121a0340 },
+	{ _MMIO(0x9888), 0x103f0017 },
+	{ _MMIO(0x9888), 0x123f0020 },
+	{ _MMIO(0x9888), 0x129a0340 },
+	{ _MMIO(0x9888), 0x10bf0017 },
+	{ _MMIO(0x9888), 0x12bf0020 },
+	{ _MMIO(0x9888), 0x041a3400 },
+	{ _MMIO(0x9888), 0x101a0000 },
+	{ _MMIO(0x9888), 0x043b8000 },
+	{ _MMIO(0x9888), 0x0a3e0010 },
+	{ _MMIO(0x9888), 0x003f0200 },
+	{ _MMIO(0x9888), 0x023f0113 },
+	{ _MMIO(0x9888), 0x043f0014 },
+	{ _MMIO(0x9888), 0x02592000 },
+	{ _MMIO(0x9888), 0x005a8000 },
+	{ _MMIO(0x9888), 0x025ac000 },
+	{ _MMIO(0x9888), 0x045a4000 },
+	{ _MMIO(0x9888), 0x0a1c8000 },
+	{ _MMIO(0x9888), 0x001d8000 },
+	{ _MMIO(0x9888), 0x021dc000 },
+	{ _MMIO(0x9888), 0x041d4000 },
+	{ _MMIO(0x9888), 0x0a1e8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f001a },
+	{ _MMIO(0x9888), 0x00394000 },
+	{ _MMIO(0x9888), 0x02395000 },
+	{ _MMIO(0x9888), 0x04391000 },
+	{ _MMIO(0x9888), 0x069a0034 },
+	{ _MMIO(0x9888), 0x109a0000 },
+	{ _MMIO(0x9888), 0x06bb4000 },
+	{ _MMIO(0x9888), 0x0abe0040 },
+	{ _MMIO(0x9888), 0x0cbf0984 },
+	{ _MMIO(0x9888), 0x0ebf0a02 },
+	{ _MMIO(0x9888), 0x02d94000 },
+	{ _MMIO(0x9888), 0x0cdac000 },
+	{ _MMIO(0x9888), 0x0edac000 },
+	{ _MMIO(0x9888), 0x0c9c0400 },
+	{ _MMIO(0x9888), 0x0c9dc000 },
+	{ _MMIO(0x9888), 0x0e9dc000 },
+	{ _MMIO(0x9888), 0x0c9e0400 },
+	{ _MMIO(0x9888), 0x109f02a8 },
+	{ _MMIO(0x9888), 0x0e9f0040 },
+	{ _MMIO(0x9888), 0x0cb95000 },
+	{ _MMIO(0x9888), 0x0eb95000 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x258b8009 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x198c4000 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185800a },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x1b830154 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x45800800 },
+	{ _MMIO(0x9888), 0x47800842 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f801084 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800044 },
+};
+
+static int
+get_l3_4_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_4;
+	lens[n] = ARRAY_SIZE(mux_config_l3_4);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00006000 },
+	{ _MMIO(0x2774), 0x0000f3ff },
+	{ _MMIO(0x2778), 0x00001800 },
+	{ _MMIO(0x277c), 0x0000fcff },
+	{ _MMIO(0x2780), 0x00000600 },
+	{ _MMIO(0x2784), 0x0000ff3f },
+	{ _MMIO(0x2788), 0x00000180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000060 },
+	{ _MMIO(0x2794), 0x0000fff3 },
+	{ _MMIO(0x2798), 0x00000018 },
+	{ _MMIO(0x279c), 0x0000fffc },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x143b000e },
+	{ _MMIO(0x9888), 0x043c55c0 },
+	{ _MMIO(0x9888), 0x0a1e0280 },
+	{ _MMIO(0x9888), 0x0c1e0408 },
+	{ _MMIO(0x9888), 0x10390000 },
+	{ _MMIO(0x9888), 0x12397a1f },
+	{ _MMIO(0x9888), 0x14bb000e },
+	{ _MMIO(0x9888), 0x04bc5000 },
+	{ _MMIO(0x9888), 0x0a9e0296 },
+	{ _MMIO(0x9888), 0x0c9e0008 },
+	{ _MMIO(0x9888), 0x10b90000 },
+	{ _MMIO(0x9888), 0x12b97a1f },
+	{ _MMIO(0x9888), 0x063b0042 },
+	{ _MMIO(0x9888), 0x103b0000 },
+	{ _MMIO(0x9888), 0x083c0000 },
+	{ _MMIO(0x9888), 0x0a3e0040 },
+	{ _MMIO(0x9888), 0x043f8000 },
+	{ _MMIO(0x9888), 0x02594000 },
+	{ _MMIO(0x9888), 0x045a8000 },
+	{ _MMIO(0x9888), 0x0c1c0400 },
+	{ _MMIO(0x9888), 0x041d8000 },
+	{ _MMIO(0x9888), 0x081e02c0 },
+	{ _MMIO(0x9888), 0x0e1e0000 },
+	{ _MMIO(0x9888), 0x0c1fa800 },
+	{ _MMIO(0x9888), 0x0e1f0260 },
+	{ _MMIO(0x9888), 0x101f0014 },
+	{ _MMIO(0x9888), 0x003905e0 },
+	{ _MMIO(0x9888), 0x06390bc0 },
+	{ _MMIO(0x9888), 0x02390018 },
+	{ _MMIO(0x9888), 0x04394000 },
+	{ _MMIO(0x9888), 0x04bb0042 },
+	{ _MMIO(0x9888), 0x10bb0000 },
+	{ _MMIO(0x9888), 0x02bc05c0 },
+	{ _MMIO(0x9888), 0x08bc0000 },
+	{ _MMIO(0x9888), 0x0abe0004 },
+	{ _MMIO(0x9888), 0x02bf8000 },
+	{ _MMIO(0x9888), 0x02d91000 },
+	{ _MMIO(0x9888), 0x02da8000 },
+	{ _MMIO(0x9888), 0x089c8000 },
+	{ _MMIO(0x9888), 0x029d8000 },
+	{ _MMIO(0x9888), 0x089e8000 },
+	{ _MMIO(0x9888), 0x0e9e0000 },
+	{ _MMIO(0x9888), 0x0e9fa806 },
+	{ _MMIO(0x9888), 0x109f0142 },
+	{ _MMIO(0x9888), 0x08b90617 },
+	{ _MMIO(0x9888), 0x0ab90be0 },
+	{ _MMIO(0x9888), 0x02b94000 },
+	{ _MMIO(0x9888), 0x0d88f000 },
+	{ _MMIO(0x9888), 0x0f88000c },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x018a8000 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x1b8a2800 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x238b52a0 },
+	{ _MMIO(0x9888), 0x258b6a95 },
+	{ _MMIO(0x9888), 0x278b0029 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c1500 },
+	{ _MMIO(0x9888), 0x1b8c0014 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x038d8000 },
+	{ _MMIO(0x9888), 0x058d2000 },
+	{ _MMIO(0x9888), 0x1f85aa80 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x01834000 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0184c000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1180c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4d800444 },
+	{ _MMIO(0x9888), 0x3d800000 },
+	{ _MMIO(0x9888), 0x4f804000 },
+	{ _MMIO(0x9888), 0x43801080 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800084 },
+	{ _MMIO(0x9888), 0x53800044 },
+	{ _MMIO(0x9888), 0x47801080 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x3f800000 },
+	{ _MMIO(0x9888), 0x41800840 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_1[] = {
+	{ _MMIO(0x9888), 0x18921400 },
+	{ _MMIO(0x9888), 0x149500ab },
+	{ _MMIO(0x9888), 0x18b21400 },
+	{ _MMIO(0x9888), 0x14b500ab },
+	{ _MMIO(0x9888), 0x18d21400 },
+	{ _MMIO(0x9888), 0x14d500ab },
+	{ _MMIO(0x9888), 0x0cdc8000 },
+	{ _MMIO(0x9888), 0x0edc4000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dcc000 },
+	{ _MMIO(0x9888), 0x1abd00a0 },
+	{ _MMIO(0x9888), 0x0abd8000 },
+	{ _MMIO(0x9888), 0x0cd88000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x04d88000 },
+	{ _MMIO(0x9888), 0x1adb0050 },
+	{ _MMIO(0x9888), 0x04db8000 },
+	{ _MMIO(0x9888), 0x06db8000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0adb4000 },
+	{ _MMIO(0x9888), 0x109f02a0 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x18b82500 },
+	{ _MMIO(0x9888), 0x02b88000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab84000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x0cb98000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x1aba0200 },
+	{ _MMIO(0x9888), 0x02ba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04908000 },
+	{ _MMIO(0x9888), 0x04918000 },
+	{ _MMIO(0x9888), 0x04927300 },
+	{ _MMIO(0x9888), 0x10920000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a934000 },
+	{ _MMIO(0x9888), 0x0a946000 },
+	{ _MMIO(0x9888), 0x0c959000 },
+	{ _MMIO(0x9888), 0x0e950098 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x04b04000 },
+	{ _MMIO(0x9888), 0x04b14000 },
+	{ _MMIO(0x9888), 0x04b20073 },
+	{ _MMIO(0x9888), 0x10b20000 },
+	{ _MMIO(0x9888), 0x04b38000 },
+	{ _MMIO(0x9888), 0x06b38000 },
+	{ _MMIO(0x9888), 0x08b34000 },
+	{ _MMIO(0x9888), 0x04b4c000 },
+	{ _MMIO(0x9888), 0x02b59890 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x06d04000 },
+	{ _MMIO(0x9888), 0x06d14000 },
+	{ _MMIO(0x9888), 0x06d20073 },
+	{ _MMIO(0x9888), 0x10d20000 },
+	{ _MMIO(0x9888), 0x18d30020 },
+	{ _MMIO(0x9888), 0x02d38000 },
+	{ _MMIO(0x9888), 0x0cd34000 },
+	{ _MMIO(0x9888), 0x0ad48000 },
+	{ _MMIO(0x9888), 0x04d42000 },
+	{ _MMIO(0x9888), 0x0ed59000 },
+	{ _MMIO(0x9888), 0x00d59800 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0f88000e },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x258b000a },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x0d8d8000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_1_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_1;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler_2[] = {
+	{ _MMIO(0x9888), 0x18121400 },
+	{ _MMIO(0x9888), 0x141500ab },
+	{ _MMIO(0x9888), 0x18321400 },
+	{ _MMIO(0x9888), 0x143500ab },
+	{ _MMIO(0x9888), 0x18521400 },
+	{ _MMIO(0x9888), 0x145500ab },
+	{ _MMIO(0x9888), 0x0c5c8000 },
+	{ _MMIO(0x9888), 0x0e5c4000 },
+	{ _MMIO(0x9888), 0x025cc000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x1a3d00a0 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0c588000 },
+	{ _MMIO(0x9888), 0x0e584000 },
+	{ _MMIO(0x9888), 0x04588000 },
+	{ _MMIO(0x9888), 0x1a5b0050 },
+	{ _MMIO(0x9888), 0x045b8000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x101f02a0 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x18382500 },
+	{ _MMIO(0x9888), 0x02388000 },
+	{ _MMIO(0x9888), 0x04384000 },
+	{ _MMIO(0x9888), 0x06384000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c388000 },
+	{ _MMIO(0x9888), 0x0c398000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x1a3a0200 },
+	{ _MMIO(0x9888), 0x023a8000 },
+	{ _MMIO(0x9888), 0x0c3a8000 },
+	{ _MMIO(0x9888), 0x04108000 },
+	{ _MMIO(0x9888), 0x04118000 },
+	{ _MMIO(0x9888), 0x04127300 },
+	{ _MMIO(0x9888), 0x10120000 },
+	{ _MMIO(0x9888), 0x1813000a },
+	{ _MMIO(0x9888), 0x0a134000 },
+	{ _MMIO(0x9888), 0x0a146000 },
+	{ _MMIO(0x9888), 0x0c159000 },
+	{ _MMIO(0x9888), 0x0e150098 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04304000 },
+	{ _MMIO(0x9888), 0x04314000 },
+	{ _MMIO(0x9888), 0x04320073 },
+	{ _MMIO(0x9888), 0x10320000 },
+	{ _MMIO(0x9888), 0x04338000 },
+	{ _MMIO(0x9888), 0x06338000 },
+	{ _MMIO(0x9888), 0x08334000 },
+	{ _MMIO(0x9888), 0x0434c000 },
+	{ _MMIO(0x9888), 0x02359890 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x06504000 },
+	{ _MMIO(0x9888), 0x06514000 },
+	{ _MMIO(0x9888), 0x06520073 },
+	{ _MMIO(0x9888), 0x10520000 },
+	{ _MMIO(0x9888), 0x18530020 },
+	{ _MMIO(0x9888), 0x02538000 },
+	{ _MMIO(0x9888), 0x0c534000 },
+	{ _MMIO(0x9888), 0x0a548000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x0e559000 },
+	{ _MMIO(0x9888), 0x00559800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x1b8aa000 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x258b0005 },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x2185000a },
+	{ _MMIO(0x9888), 0x1b830150 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0d848000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x07844000 },
+	{ _MMIO(0x9888), 0x1d808000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x17804000 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47801021 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800c64 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x41800c02 },
+};
+
+static int
+get_sampler_2_mux_config(struct drm_i915_private *dev_priv,
+			 const struct i915_oa_reg **regs,
+			 int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler_2;
+	lens[n] = ARRAY_SIZE(mux_config_sampler_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000ffbf },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fff7 },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fff9 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x16154d60 },
+	{ _MMIO(0x9888), 0x16352e60 },
+	{ _MMIO(0x9888), 0x16554d60 },
+	{ _MMIO(0x9888), 0x16950000 },
+	{ _MMIO(0x9888), 0x16b50000 },
+	{ _MMIO(0x9888), 0x16d50000 },
+	{ _MMIO(0x9888), 0x005c8000 },
+	{ _MMIO(0x9888), 0x045cc000 },
+	{ _MMIO(0x9888), 0x065c4000 },
+	{ _MMIO(0x9888), 0x083d8000 },
+	{ _MMIO(0x9888), 0x0a3d8000 },
+	{ _MMIO(0x9888), 0x0458c000 },
+	{ _MMIO(0x9888), 0x025b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0c1fa000 },
+	{ _MMIO(0x9888), 0x0e1f00aa },
+	{ _MMIO(0x9888), 0x02384000 },
+	{ _MMIO(0x9888), 0x04388000 },
+	{ _MMIO(0x9888), 0x06388000 },
+	{ _MMIO(0x9888), 0x08384000 },
+	{ _MMIO(0x9888), 0x0a384000 },
+	{ _MMIO(0x9888), 0x0c384000 },
+	{ _MMIO(0x9888), 0x00398000 },
+	{ _MMIO(0x9888), 0x0239a000 },
+	{ _MMIO(0x9888), 0x0439a000 },
+	{ _MMIO(0x9888), 0x06392000 },
+	{ _MMIO(0x9888), 0x043a8000 },
+	{ _MMIO(0x9888), 0x063a8000 },
+	{ _MMIO(0x9888), 0x08138000 },
+	{ _MMIO(0x9888), 0x0a138000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x0415cfc7 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x02338000 },
+	{ _MMIO(0x9888), 0x0c338000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x0035c700 },
+	{ _MMIO(0x9888), 0x063500cf },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04538000 },
+	{ _MMIO(0x9888), 0x06538000 },
+	{ _MMIO(0x9888), 0x0454c000 },
+	{ _MMIO(0x9888), 0x0255cfc7 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06dc8000 },
+	{ _MMIO(0x9888), 0x08dc4000 },
+	{ _MMIO(0x9888), 0x0cdcc000 },
+	{ _MMIO(0x9888), 0x0edcc000 },
+	{ _MMIO(0x9888), 0x1abd00a8 },
+	{ _MMIO(0x9888), 0x0cd8c000 },
+	{ _MMIO(0x9888), 0x0ed84000 },
+	{ _MMIO(0x9888), 0x0edb8000 },
+	{ _MMIO(0x9888), 0x18db0800 },
+	{ _MMIO(0x9888), 0x1adb0254 },
+	{ _MMIO(0x9888), 0x0e9faa00 },
+	{ _MMIO(0x9888), 0x109f02aa },
+	{ _MMIO(0x9888), 0x0eb84000 },
+	{ _MMIO(0x9888), 0x16b84000 },
+	{ _MMIO(0x9888), 0x18b8156a },
+	{ _MMIO(0x9888), 0x06b98000 },
+	{ _MMIO(0x9888), 0x08b9a000 },
+	{ _MMIO(0x9888), 0x0ab9a000 },
+	{ _MMIO(0x9888), 0x0cb9a000 },
+	{ _MMIO(0x9888), 0x0eb9a000 },
+	{ _MMIO(0x9888), 0x18baa000 },
+	{ _MMIO(0x9888), 0x1aba0002 },
+	{ _MMIO(0x9888), 0x16934000 },
+	{ _MMIO(0x9888), 0x1893000a },
+	{ _MMIO(0x9888), 0x0a947000 },
+	{ _MMIO(0x9888), 0x0c95c5c1 },
+	{ _MMIO(0x9888), 0x0e9500c3 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x0eb38000 },
+	{ _MMIO(0x9888), 0x16b30040 },
+	{ _MMIO(0x9888), 0x18b30020 },
+	{ _MMIO(0x9888), 0x06b48000 },
+	{ _MMIO(0x9888), 0x08b41000 },
+	{ _MMIO(0x9888), 0x0ab48000 },
+	{ _MMIO(0x9888), 0x06b5c500 },
+	{ _MMIO(0x9888), 0x08b500c3 },
+	{ _MMIO(0x9888), 0x0eb5c100 },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x16d31500 },
+	{ _MMIO(0x9888), 0x08d4e000 },
+	{ _MMIO(0x9888), 0x08d5c100 },
+	{ _MMIO(0x9888), 0x0ad5c3c5 },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x0d88f800 },
+	{ _MMIO(0x9888), 0x0f88000f },
+	{ _MMIO(0x9888), 0x038a8000 },
+	{ _MMIO(0x9888), 0x058a8000 },
+	{ _MMIO(0x9888), 0x078a8000 },
+	{ _MMIO(0x9888), 0x098a8000 },
+	{ _MMIO(0x9888), 0x0b8a8000 },
+	{ _MMIO(0x9888), 0x0d8a8000 },
+	{ _MMIO(0x9888), 0x258baaa5 },
+	{ _MMIO(0x9888), 0x278b002a },
+	{ _MMIO(0x9888), 0x238b2a80 },
+	{ _MMIO(0x9888), 0x0f8c4000 },
+	{ _MMIO(0x9888), 0x178c2000 },
+	{ _MMIO(0x9888), 0x198c5500 },
+	{ _MMIO(0x9888), 0x1b8c0015 },
+	{ _MMIO(0x9888), 0x078d8000 },
+	{ _MMIO(0x9888), 0x098da000 },
+	{ _MMIO(0x9888), 0x0b8da000 },
+	{ _MMIO(0x9888), 0x0d8da000 },
+	{ _MMIO(0x9888), 0x0f8da000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800c42 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45800063 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x47800800 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f8014a4 },
+	{ _MMIO(0x9888), 0x41801042 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x0000fe7f },
+	{ _MMIO(0x2780), 0x00000000 },
+	{ _MMIO(0x2784), 0x0000ff9f },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000ffe7 },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fffb },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000fffd },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x16150000 },
+	{ _MMIO(0x9888), 0x16350000 },
+	{ _MMIO(0x9888), 0x16550000 },
+	{ _MMIO(0x9888), 0x16952e60 },
+	{ _MMIO(0x9888), 0x16b54d60 },
+	{ _MMIO(0x9888), 0x16d52e60 },
+	{ _MMIO(0x9888), 0x065c8000 },
+	{ _MMIO(0x9888), 0x085cc000 },
+	{ _MMIO(0x9888), 0x0a5cc000 },
+	{ _MMIO(0x9888), 0x0c5c4000 },
+	{ _MMIO(0x9888), 0x0e3d8000 },
+	{ _MMIO(0x9888), 0x183da000 },
+	{ _MMIO(0x9888), 0x06588000 },
+	{ _MMIO(0x9888), 0x08588000 },
+	{ _MMIO(0x9888), 0x0a584000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x185b5800 },
+	{ _MMIO(0x9888), 0x1a5b000a },
+	{ _MMIO(0x9888), 0x0e1faa00 },
+	{ _MMIO(0x9888), 0x101f02aa },
+	{ _MMIO(0x9888), 0x0e384000 },
+	{ _MMIO(0x9888), 0x16384000 },
+	{ _MMIO(0x9888), 0x18382a55 },
+	{ _MMIO(0x9888), 0x06398000 },
+	{ _MMIO(0x9888), 0x0839a000 },
+	{ _MMIO(0x9888), 0x0a39a000 },
+	{ _MMIO(0x9888), 0x0c39a000 },
+	{ _MMIO(0x9888), 0x0e39a000 },
+	{ _MMIO(0x9888), 0x1a3a02a0 },
+	{ _MMIO(0x9888), 0x0e138000 },
+	{ _MMIO(0x9888), 0x16130500 },
+	{ _MMIO(0x9888), 0x06148000 },
+	{ _MMIO(0x9888), 0x08146000 },
+	{ _MMIO(0x9888), 0x0615c100 },
+	{ _MMIO(0x9888), 0x0815c500 },
+	{ _MMIO(0x9888), 0x0a1500c3 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x16335040 },
+	{ _MMIO(0x9888), 0x08349000 },
+	{ _MMIO(0x9888), 0x0a341000 },
+	{ _MMIO(0x9888), 0x083500c1 },
+	{ _MMIO(0x9888), 0x0a35c500 },
+	{ _MMIO(0x9888), 0x0c3500c3 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x1853002a },
+	{ _MMIO(0x9888), 0x0a54e000 },
+	{ _MMIO(0x9888), 0x0c55c500 },
+	{ _MMIO(0x9888), 0x0e55c1c3 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x00dc8000 },
+	{ _MMIO(0x9888), 0x02dcc000 },
+	{ _MMIO(0x9888), 0x04dc4000 },
+	{ _MMIO(0x9888), 0x04bd8000 },
+	{ _MMIO(0x9888), 0x06bd8000 },
+	{ _MMIO(0x9888), 0x02d8c000 },
+	{ _MMIO(0x9888), 0x02db8000 },
+	{ _MMIO(0x9888), 0x04db4000 },
+	{ _MMIO(0x9888), 0x06db4000 },
+	{ _MMIO(0x9888), 0x08db8000 },
+	{ _MMIO(0x9888), 0x0c9fa000 },
+	{ _MMIO(0x9888), 0x0e9f00aa },
+	{ _MMIO(0x9888), 0x02b84000 },
+	{ _MMIO(0x9888), 0x04b84000 },
+	{ _MMIO(0x9888), 0x06b84000 },
+	{ _MMIO(0x9888), 0x08b84000 },
+	{ _MMIO(0x9888), 0x0ab88000 },
+	{ _MMIO(0x9888), 0x0cb88000 },
+	{ _MMIO(0x9888), 0x00b98000 },
+	{ _MMIO(0x9888), 0x02b9a000 },
+	{ _MMIO(0x9888), 0x04b9a000 },
+	{ _MMIO(0x9888), 0x06b92000 },
+	{ _MMIO(0x9888), 0x0aba8000 },
+	{ _MMIO(0x9888), 0x0cba8000 },
+	{ _MMIO(0x9888), 0x04938000 },
+	{ _MMIO(0x9888), 0x06938000 },
+	{ _MMIO(0x9888), 0x0494c000 },
+	{ _MMIO(0x9888), 0x0295cfc7 },
+	{ _MMIO(0x9888), 0x10950000 },
+	{ _MMIO(0x9888), 0x02b38000 },
+	{ _MMIO(0x9888), 0x08b38000 },
+	{ _MMIO(0x9888), 0x04b42000 },
+	{ _MMIO(0x9888), 0x06b41000 },
+	{ _MMIO(0x9888), 0x00b5c700 },
+	{ _MMIO(0x9888), 0x04b500cf },
+	{ _MMIO(0x9888), 0x10b50000 },
+	{ _MMIO(0x9888), 0x0ad38000 },
+	{ _MMIO(0x9888), 0x0cd38000 },
+	{ _MMIO(0x9888), 0x06d46000 },
+	{ _MMIO(0x9888), 0x04d5c700 },
+	{ _MMIO(0x9888), 0x06d500cf },
+	{ _MMIO(0x9888), 0x10d50000 },
+	{ _MMIO(0x9888), 0x03888000 },
+	{ _MMIO(0x9888), 0x05888000 },
+	{ _MMIO(0x9888), 0x07888000 },
+	{ _MMIO(0x9888), 0x09888000 },
+	{ _MMIO(0x9888), 0x0b888000 },
+	{ _MMIO(0x9888), 0x0d880400 },
+	{ _MMIO(0x9888), 0x0f8a8000 },
+	{ _MMIO(0x9888), 0x198a8000 },
+	{ _MMIO(0x9888), 0x1b8aaaa0 },
+	{ _MMIO(0x9888), 0x1d8a0002 },
+	{ _MMIO(0x9888), 0x258b555a },
+	{ _MMIO(0x9888), 0x278b0015 },
+	{ _MMIO(0x9888), 0x238b5500 },
+	{ _MMIO(0x9888), 0x038c4000 },
+	{ _MMIO(0x9888), 0x058c4000 },
+	{ _MMIO(0x9888), 0x078c4000 },
+	{ _MMIO(0x9888), 0x098c4000 },
+	{ _MMIO(0x9888), 0x0b8c4000 },
+	{ _MMIO(0x9888), 0x0d8c4000 },
+	{ _MMIO(0x9888), 0x018d8000 },
+	{ _MMIO(0x9888), 0x038da000 },
+	{ _MMIO(0x9888), 0x058da000 },
+	{ _MMIO(0x9888), 0x078d2000 },
+	{ _MMIO(0x9888), 0x2185aaaa },
+	{ _MMIO(0x9888), 0x2385002a },
+	{ _MMIO(0x9888), 0x1f85aa00 },
+	{ _MMIO(0x9888), 0x0f834000 },
+	{ _MMIO(0x9888), 0x19835400 },
+	{ _MMIO(0x9888), 0x1b830155 },
+	{ _MMIO(0x9888), 0x03834000 },
+	{ _MMIO(0x9888), 0x05834000 },
+	{ _MMIO(0x9888), 0x07834000 },
+	{ _MMIO(0x9888), 0x09834000 },
+	{ _MMIO(0x9888), 0x0b834000 },
+	{ _MMIO(0x9888), 0x0d834000 },
+	{ _MMIO(0x9888), 0x0784c000 },
+	{ _MMIO(0x9888), 0x0984c000 },
+	{ _MMIO(0x9888), 0x0b84c000 },
+	{ _MMIO(0x9888), 0x0d84c000 },
+	{ _MMIO(0x9888), 0x0f84c000 },
+	{ _MMIO(0x9888), 0x01848000 },
+	{ _MMIO(0x9888), 0x0384c000 },
+	{ _MMIO(0x9888), 0x0584c000 },
+	{ _MMIO(0x9888), 0x1780c000 },
+	{ _MMIO(0x9888), 0x1980c000 },
+	{ _MMIO(0x9888), 0x1b80c000 },
+	{ _MMIO(0x9888), 0x1d80c000 },
+	{ _MMIO(0x9888), 0x1f80c000 },
+	{ _MMIO(0x9888), 0x11808000 },
+	{ _MMIO(0x9888), 0x1380c000 },
+	{ _MMIO(0x9888), 0x1580c000 },
+	{ _MMIO(0x9888), 0x4f800000 },
+	{ _MMIO(0x9888), 0x43800882 },
+	{ _MMIO(0x9888), 0x51800000 },
+	{ _MMIO(0x9888), 0x45801082 },
+	{ _MMIO(0x9888), 0x53800000 },
+	{ _MMIO(0x9888), 0x478014a5 },
+	{ _MMIO(0x9888), 0x21800000 },
+	{ _MMIO(0x9888), 0x31800000 },
+	{ _MMIO(0x9888), 0x4d800000 },
+	{ _MMIO(0x9888), 0x3f800002 },
+	{ _MMIO(0x9888), 0x41800c62 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x59800000 },
+	{ _MMIO(0x9888), 0x59800001 },
+	{ _MMIO(0x9888), 0x338b0000 },
+	{ _MMIO(0x9888), 0x258b0066 },
+	{ _MMIO(0x9888), 0x058b0000 },
+	{ _MMIO(0x9888), 0x038b0000 },
+	{ _MMIO(0x9888), 0x03844000 },
+	{ _MMIO(0x9888), 0x47800080 },
+	{ _MMIO(0x9888), 0x57800000 },
+	{ _MMIO(0x1823a4), 0x00000000 },
+	{ _MMIO(0x9888), 0x59800000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 {
 	dev_priv->perf.oa.n_mux_configs = 0;
@@ -154,14 +2035,300 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 	dev_priv->perf.oa.flex_regs = NULL;
 	dev_priv->perf.oa.flex_regs_len = 0;
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_L3_4:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_4_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_4\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_4;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_4);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_4;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_4);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_1_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_1);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_2_mux_config(dev_priv,
+						 dev_priv->perf.oa.mux_regs,
+						 dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_2);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
 		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
 		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
 
 			/* EINVAL because *_register_sysfs already checked this
 			 * and so it wouldn't have been advertised to userspace and
@@ -171,14 +2338,66 @@ int i915_oa_select_metric_set_chv(struct drm_i915_private *dev_priv)
 		}
 
 		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
+			b_counter_config_tdl_1;
 		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+			ARRAY_SIZE(b_counter_config_tdl_1);
 
 		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
+			flex_eu_config_tdl_1;
 		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
 
 		return 0;
 	default:
@@ -208,6 +2427,292 @@ static struct attribute_group group_render_basic = {
 	.attrs =  attrs_render_basic,
 };
 
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "f522a89c-ecd1-4522-8331-3383c54af5f5",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "a9ccc03d-a943-4e6b-9cd6-13e063075927",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "2cf0c064-68df-4fac-9b3f-57f51ca8a069",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "78a87ff9-543a-49ce-95ea-26d86071ea93",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "9f2cece5-7bfe-4320-ad66-8c7cc526bec5",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "d890ef38-d309-47e4-b8b5-aa779bb19ab0",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_l3_4_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_4);
+}
+
+static struct device_attribute dev_attr_l3_4_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_4_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_4[] = {
+	&dev_attr_l3_4_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_4 = {
+	.name = "5fdff4a6-9dc8-45e1-bfda-ef54869fbdd4",
+	.attrs =  attrs_l3_4,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "2c0e45e1-7e2c-4a14-ae00-0b7ec868b8aa",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_1);
+}
+
+static struct device_attribute dev_attr_sampler_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_1[] = {
+	&dev_attr_sampler_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_1 = {
+	.name = "71148d78-baf5-474f-878a-e23158d0265d",
+	.attrs =  attrs_sampler_1,
+};
+
+static ssize_t
+show_sampler_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER_2);
+}
+
+static struct device_attribute dev_attr_sampler_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler_2[] = {
+	&dev_attr_sampler_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler_2 = {
+	.name = "b996a2b7-c59c-492d-877a-8cd54fd6df84",
+	.attrs =  attrs_sampler_2,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "eb2fecba-b431-42e7-8261-fe9429a6e67a",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "60749470-a648-4a4b-9f10-dbfe1e36e44d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "4a534b07-cba3-414d-8d60-874830e883aa",
+	.attrs =  attrs_test_oa,
+};
+
 int
 i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 {
@@ -220,9 +2725,113 @@ i915_perf_register_sysfs_chv(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+		if (ret)
+			goto error_l3_4;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+		if (ret)
+			goto error_sampler_1;
+	}
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+		if (ret)
+			goto error_sampler_2;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+error_sampler_2:
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+error_sampler_1:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+error_l3_4:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -235,4 +2844,30 @@ i915_perf_unregister_sysfs_chv(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_l3_4_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_4);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_1);
+	if (get_sampler_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler_2);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
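
As an aside, the sysfs groups registered above are what userspace (gputop, igt, Mesa) keys off: each metric set appears as a <uuid>/id entry under the card's metrics/ directory, and that id is what gets passed as DRM_I915_PERF_PROP_OA_METRICS_SET when opening a perf stream. A rough sketch of that usage follows; the card0 path, report format and exponent below are illustrative assumptions rather than anything the patch mandates:

/*
 * Rough userspace-side sketch (roughly what gputop/igt do): read the metric
 * set ID that i915_perf_register_sysfs_chv() above publishes under sysfs and
 * feed it to the i915 perf open ioctl.  The card0 path, the OA report format
 * and the exponent are illustrative assumptions only.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

#include <drm/i915_drm.h>

static int open_oa_stream(int drm_fd, const char *metric_set_uuid)
{
	char path[256];
	uint64_t metrics_set;
	FILE *f;

	/* Each advertised metric set is a <uuid>/id entry under metrics/ */
	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/metrics/%s/id", metric_set_uuid);

	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%" SCNu64, &metrics_set) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);

	uint64_t properties[] = {
		/* Include raw OA reports in samples read() from the stream */
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,

		/* The ID read from sysfs selects the MUX/boolean config */
		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set,

		/* One of the Gen 8+ report formats exposed by this series */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,

		/* Periodic sampling rate exponent (picked arbitrarily here) */
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};

	/* Returns a new stream fd to poll()/read() OA reports from */
	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}
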
diff --git a/drivers/gpu/drm/i915/i915_oa_hsw.c b/drivers/gpu/drm/i915/i915_oa_hsw.c
index 8c13e0880e53..10f169f683b7 100644
--- a/drivers/gpu/drm/i915/i915_oa_hsw.c
+++ b/drivers/gpu/drm/i915/i915_oa_hsw.c
@@ -49,6 +49,9 @@ static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_render_basic[] = {
 	{ _MMIO(0x253a4), 0x01600000 },
 	{ _MMIO(0x25440), 0x00100000 },
@@ -148,6 +151,9 @@ static const struct i915_oa_reg b_counter_config_compute_basic[] = {
 	{ _MMIO(0x236c), 0x00000000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_basic[] = {
 	{ _MMIO(0x253a4), 0x00000000 },
 	{ _MMIO(0x2681c), 0x01f00800 },
@@ -223,6 +229,9 @@ static const struct i915_oa_reg b_counter_config_compute_extended[] = {
 	{ _MMIO(0x27ac), 0x0000fffe },
 };
 
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+};
+
 static const struct i915_oa_reg mux_config_compute_extended[] = {
 	{ _MMIO(0x2681c), 0x3eb00800 },
 	{ _MMIO(0x26820), 0x00900000 },
@@ -289,6 +298,9 @@ static const struct i915_oa_reg b_counter_config_memory_reads[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };
 
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_reads[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x2d800000 },
@@ -358,6 +370,9 @@ static const struct i915_oa_reg b_counter_config_memory_writes[] = {
 	{ _MMIO(0x27ac), 0x0000fc00 },
 };
 
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+};
+
 static const struct i915_oa_reg mux_config_memory_writes[] = {
 	{ _MMIO(0x253a4), 0x34300000 },
 	{ _MMIO(0x25440), 0x01500000 },
@@ -405,6 +420,9 @@ static const struct i915_oa_reg b_counter_config_sampler_balance[] = {
 	{ _MMIO(0x2724), 0x00800000 },
 };
 
+static const struct i915_oa_reg flex_eu_config_sampler_balance[] = {
+};
+
 static const struct i915_oa_reg mux_config_sampler_balance[] = {
 	{ _MMIO(0x2eb9c), 0x01906400 },
 	{ _MMIO(0x2fb9c), 0x01906400 },
@@ -492,6 +510,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_render_basic);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_BASIC:
 		dev_priv->perf.oa.n_mux_configs =
@@ -513,6 +536,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_basic);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
 		return 0;
 	case METRIC_SET_ID_COMPUTE_EXTENDED:
 		dev_priv->perf.oa.n_mux_configs =
@@ -534,6 +562,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_compute_extended);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_READS:
 		dev_priv->perf.oa.n_mux_configs =
@@ -555,6 +588,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_reads);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
 		return 0;
 	case METRIC_SET_ID_MEMORY_WRITES:
 		dev_priv->perf.oa.n_mux_configs =
@@ -576,6 +614,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_memory_writes);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
 		return 0;
 	case METRIC_SET_ID_SAMPLER_BALANCE:
 		dev_priv->perf.oa.n_mux_configs =
@@ -597,6 +640,11 @@ int i915_oa_select_metric_set_hsw(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.b_counter_regs_len =
 			ARRAY_SIZE(b_counter_config_sampler_balance);
 
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler_balance;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler_balance);
+
 		return 0;
 	default:
 		return -ENODEV;
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt2.c b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
index 9ab9d21ec335..1268beda212c 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt2.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt2.c
@@ -33,9 +33,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt2 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt2 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -146,66 +163,3120 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8200 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x004f0db2 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f1880 },
+	{ _MMIO(0x9888), 0x0a4f0011 },
+	{ _MMIO(0x9888), 0x0c4f0e3c },
+	{ _MMIO(0x9888), 0x0e4f1d80 },
+	{ _MMIO(0x9888), 0x086c0002 },
+	{ _MMIO(0x9888), 0x0a6c0100 },
+	{ _MMIO(0x9888), 0x0e6c000c },
+	{ _MMIO(0x9888), 0x026c000b },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x081b4000 },
+	{ _MMIO(0x9888), 0x0a1b8000 },
+	{ _MMIO(0x9888), 0x0e1b4000 },
+	{ _MMIO(0x9888), 0x021b4000 },
+	{ _MMIO(0x9888), 0x1a1c4000 },
+	{ _MMIO(0x9888), 0x1c1c0012 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x005bc000 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b8000 },
+	{ _MMIO(0x9888), 0x0a5b4000 },
+	{ _MMIO(0x9888), 0x0c5bc000 },
+	{ _MMIO(0x9888), 0x0e5b8000 },
+	{ _MMIO(0x9888), 0x105c8000 },
+	{ _MMIO(0x9888), 0x1a5ca000 },
+	{ _MMIO(0x9888), 0x1c5c002d },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x0a4c0800 },
+	{ _MMIO(0x9888), 0x0c4c0082 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002cc000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cbe00 },
+	{ _MMIO(0x9888), 0x182c00ef },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900840 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900840 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900840 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900840 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1810 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0000 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900167 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900842 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53901111 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		regs[n] = mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_lt_0x02);
+		n++;
+	}
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		regs[n] = mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_compute_basic_0_slices_0x01_and_sku_gte_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x15968000 },
+	{ _MMIO(0x9888), 0x17968000 },
+	{ _MMIO(0x9888), 0x0f96c000 },
+	{ _MMIO(0x9888), 0x1f950011 },
+	{ _MMIO(0x9888), 0x1d950014 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x0b978000 },
+	{ _MMIO(0x9888), 0x0f974000 },
+	{ _MMIO(0x9888), 0x11974000 },
+	{ _MMIO(0x9888), 0x13978000 },
+	{ _MMIO(0x9888), 0x09974000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile_0_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x05e5a000 },
+	{ _MMIO(0x9888), 0x01e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x419010a0 },
+	{ _MMIO(0x9888), 0x55904000 },
+	{ _MMIO(0x9888), 0x45901000 },
+	{ _MMIO(0x9888), 0x47900084 },
+	{ _MMIO(0x9888), 0x57904400 },
+	{ _MMIO(0x9888), 0x499000a5 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900081 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x439014a4 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 2);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 2);
+
+	if (dev_priv->drm.pdev->revision < 0x02) {
+		regs[n] = mux_config_render_pipe_profile_0_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_lt_0x02);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision >= 0x02) {
+		regs[n] = mux_config_render_pipe_profile_0_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile_0_sku_gte_0x02);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900157 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13946000 },
+	{ _MMIO(0x9888), 0x15940016 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d9300aa },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x0f940018 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 3);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 3);
+
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		regs[n] = mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_reads_0_slices_0x01_and_sku_lt_0x02);
+		n++;
+	}
+	if ((dev_priv->drm.pdev->revision < 0x05) &&
+		   (dev_priv->drm.pdev->revision >= 0x02)) {
+		regs[n] = mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_reads_0_sku_lt_0x05_and_sku_gte_0x02);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision >= 0x05) {
+		regs[n] = mux_config_memory_reads_0_sku_gte_0x05;
+		lens[n] = ARRAY_SIZE(mux_config_memory_reads_0_sku_gte_0x05);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x0f968000 },
+	{ _MMIO(0x9888), 0x1196c000 },
+	{ _MMIO(0x9888), 0x13964000 },
+	{ _MMIO(0x9888), 0x11938000 },
+	{ _MMIO(0x9888), 0x1b93fe00 },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x19940000 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x1d940000 },
+	{ _MMIO(0x9888), 0x1b954000 },
+	{ _MMIO(0x9888), 0x1d95a550 },
+	{ _MMIO(0x9888), 0x1f9502aa },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x13945400 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901400 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x1b93aa55 },
+	{ _MMIO(0x9888), 0x1d93002a },
+	{ _MMIO(0x9888), 0x01940010 },
+	{ _MMIO(0x9888), 0x07941100 },
+	{ _MMIO(0x9888), 0x09941312 },
+	{ _MMIO(0x9888), 0x0b941514 },
+	{ _MMIO(0x9888), 0x0d941716 },
+	{ _MMIO(0x9888), 0x1b940000 },
+	{ _MMIO(0x9888), 0x11940000 },
+	{ _MMIO(0x9888), 0x01e58000 },
+	{ _MMIO(0x9888), 0x03e57000 },
+	{ _MMIO(0x9888), 0x2f900167 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c20 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900421 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900421 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes_0_sku_gte_0x05[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 3);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 3);
+
+	if ((INTEL_INFO(dev_priv)->sseu.slice_mask & 0x01) &&
+	    (dev_priv->drm.pdev->revision < 0x02)) {
+		regs[n] = mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_writes_0_slices_0x01_and_sku_lt_0x02);
+		n++;
+	}
+	if ((dev_priv->drm.pdev->revision < 0x05) &&
+	    (dev_priv->drm.pdev->revision >= 0x02)) {
+		regs[n] = mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02;
+		lens[n] = ARRAY_SIZE(mux_config_memory_writes_0_sku_lt_0x05_and_sku_gte_0x02);
+		n++;
+	}
+	if (dev_priv->drm.pdev->revision >= 0x05) {
+		regs[n] = mux_config_memory_writes_0_sku_gte_0x05;
+		lens[n] = ARRAY_SIZE(mux_config_memory_writes_0_sku_gte_0x05);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended_0_subslices_0x01[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0xd28), 0x00000000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	if (INTEL_INFO(dev_priv)->sseu.subslice_mask & 0x01) {
+		regs[n] = mux_config_compute_extended_0_subslices_0x01;
+		lens[n] = ARRAY_SIZE(mux_config_compute_extended_0_subslices_0x01);
+		n++;
+	}
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f901403 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900167 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900042 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53901111 },
+	{ _MMIO(0x9888), 0x43900420 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0e0f006c },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1190e000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c00 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x143a5800 },
+	{ _MMIO(0x9888), 0x163a00c0 },
+	{ _MMIO(0x9888), 0x12380240 },
+	{ _MMIO(0x9888), 0x14380002 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c1500 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f9500 },
+	{ _MMIO(0x9888), 0x100f002a },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x0a2dc000 },
+	{ _MMIO(0x9888), 0x0c2dc000 },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x06393000 },
+	{ _MMIO(0x9888), 0x0c3a28c1 },
+	{ _MMIO(0x9888), 0x003a0000 },
+	{ _MMIO(0x9888), 0x0a33f000 },
+	{ _MMIO(0x9888), 0x0c33f000 },
+	{ _MMIO(0x9888), 0x0a37a000 },
+	{ _MMIO(0x9888), 0x0c37a000 },
+	{ _MMIO(0x9888), 0x0a380977 },
+	{ _MMIO(0x9888), 0x08380000 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06383000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900800 },
+	{ _MMIO(0x9888), 0x47901000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900844 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810016 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_sklgt2(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "fe47b29d-ae51-423e-bff4-27d965a95b60",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "e0ad5ae0-84ba-4f29-a723-1906c12cb774",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "9bc436dd-6130-4add-affc-283eb6eaa864",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "2ea0da8f-3527-4669-9d9d-13099a7435bf",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "d97d16af-028b-4cd1-a672-6210cb5513dd",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "9fb22842-e708-43f7-9752-e0e41670c39e",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "f519e481-24d2-4d42-87c9-3fdd12c00202",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "5378e2a1-4248-4188-a4ae-da25a794c603",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "f42cdd6a-b000-42cb-870f-5eb423a7f514",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "b9bf2423-d88c-4a7b-a051-627611d00dcc",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "2414a93d-d84f-406e-99c0-472161194b40",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "53a45d2d-170b-4cf5-b7bb-585120c8e2f5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "b4cff514-a91e-4798-a0b3-426ca13fc9c1",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "7821d13b-9b8b-4405-9618-78cd56b62cce",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "893f1a4d-919d-4388-8cb7-746d73ea7259",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "41a24047-7484-4ead-ae37-de907e5ff2b2",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "95910492-943f-44bd-9461-390240f243fd",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "1651949f-0ac0-4cb1-a06f-dafd74a407d1",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -220,9 +3291,145 @@ i915_perf_register_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -235,4 +3442,38 @@ i915_perf_unregister_sysfs_sklgt2(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt3.c b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
index e32d3b3ad77a..7765e22dfa17 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt3.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt3.c
@@ -33,9 +33,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt3 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt3 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -157,66 +174,2669 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900863 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900c62 },
+	{ _MMIO(0x9888), 0x53903333 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901150 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x55905111 },
+	{ _MMIO(0x9888), 0x45901400 },
+	{ _MMIO(0x9888), 0x479004a5 },
+	{ _MMIO(0x9888), 0x57903455 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b9000a0 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900005 },
+	{ _MMIO(0x9888), 0x53900455 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900063 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53903333 },
+	{ _MMIO(0x9888), 0x43900840 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900005 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900402 },
+	{ _MMIO(0x9888), 0x53901550 },
+	{ _MMIO(0x9888), 0x45900080 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900050 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900115 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x47900884 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_sklgt3(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "4616d450-2393-4836-8146-53c5ed84d359",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "4320492b-fd03-42ac-922f-dbe1ef3b7b58",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "bd2d9cae-b9ec-4f5b-9d2f-934bed398a2d",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "4ca0f3fe-7fd3-4924-98cb-1807d9879767",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "a0c0172c-ee13-403d-99ff-2bdf6936cf14",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "52435e0b-f188-42ea-8680-21a56ee20dee",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "27076eeb-49f3-4fed-8423-c66506005c63",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "4616d450-2393-4836-8146-53c5ed84d359",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "8071b409-c39a-4674-94d7-32962ecfb512",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "5e0b391e-9ea8-4901-b2ff-b64ff616c7ed",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "25dc828e-1d2d-426e-9546-a1d4233cdf16",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "3dba9405-2d7e-4d70-8199-e734e82fd6bf",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "76935d7b-09c9-46bf-87f1-c18b4a86ebe5",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "1b34c0d6-4f4c-4d7b-833f-4aaf236d87a6",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "b375c985-9953-455b-bda2-b03f7594e9db",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "3e2be2bb-884a-49bb-82c5-2358e6bd5f2d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "2d80a648-7b5a-4e92-bbe7-3b5c76f2e221",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "cfae9232-6ffc-42cc-a703-9790016925f0",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "2b985803-d3c9-4629-8a4f-634bfecba0e8",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -231,9 +2851,145 @@ i915_perf_register_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -246,4 +3002,38 @@ i915_perf_unregister_sysfs_sklgt3(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
diff --git a/drivers/gpu/drm/i915/i915_oa_sklgt4.c b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
index ed034f190a6c..9ddab43a2176 100644
--- a/drivers/gpu/drm/i915/i915_oa_sklgt4.c
+++ b/drivers/gpu/drm/i915/i915_oa_sklgt4.c
@@ -33,9 +33,26 @@
 
 enum metric_set_id {
 	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
 };
 
-int i915_oa_n_builtin_metric_sets_sklgt4 = 1;
+int i915_oa_n_builtin_metric_sets_sklgt4 = 18;
 
 static const struct i915_oa_reg b_counter_config_render_basic[] = {
 	{ _MMIO(0x2710), 0x00000000 },
@@ -168,66 +185,2712 @@ get_render_basic_mux_config(struct drm_i915_private *dev_priv,
 	return n;
 }
 
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53905555 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51901110 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55901111 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57901411 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900411 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53905555 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900001 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x131203e0 },
+	{ _MMIO(0x9888), 0x133203e0 },
+	{ _MMIO(0x9888), 0x135203e0 },
+	{ _MMIO(0x9888), 0x1a4ef000 },
+	{ _MMIO(0x9888), 0x1c4e0003 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0c4c02a0 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0150 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x182c00a8 },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1acef000 },
+	{ _MMIO(0x9888), 0x1cce0003 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0ccc02a0 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x0c8d8000 },
+	{ _MMIO(0x9888), 0x0e8da000 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x108f0150 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x18ac00a8 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x072f8000 },
+	{ _MMIO(0x9888), 0x0d4c0100 },
+	{ _MMIO(0x9888), 0x0d0d8000 },
+	{ _MMIO(0x9888), 0x0f0da000 },
+	{ _MMIO(0x9888), 0x110f01b0 },
+	{ _MMIO(0x9888), 0x192c0080 },
+	{ _MMIO(0x9888), 0x0f2d4000 },
+	{ _MMIO(0x9888), 0x0f108000 },
+	{ _MMIO(0x9888), 0x0f118000 },
+	{ _MMIO(0x9888), 0x0f121980 },
+	{ _MMIO(0x9888), 0x01120000 },
+	{ _MMIO(0x9888), 0x0f134000 },
+	{ _MMIO(0x9888), 0x0f304000 },
+	{ _MMIO(0x9888), 0x0f314000 },
+	{ _MMIO(0x9888), 0x0f320033 },
+	{ _MMIO(0x9888), 0x01320000 },
+	{ _MMIO(0x9888), 0x0f331000 },
+	{ _MMIO(0x9888), 0x0d508000 },
+	{ _MMIO(0x9888), 0x0d518000 },
+	{ _MMIO(0x9888), 0x0d521980 },
+	{ _MMIO(0x9888), 0x01520000 },
+	{ _MMIO(0x9888), 0x0d534000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51901100 },
+	{ _MMIO(0x9888), 0x41901000 },
+	{ _MMIO(0x9888), 0x43901423 },
+	{ _MMIO(0x9888), 0x53903331 },
+	{ _MMIO(0x9888), 0x45900044 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900010 },
+	{ _MMIO(0x9888), 0x41900060 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900821 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
 int i915_oa_select_metric_set_sklgt4(struct drm_i915_private *dev_priv)
 {
-	dev_priv->perf.oa.n_mux_configs = 0;
-	dev_priv->perf.oa.b_counter_regs = NULL;
-	dev_priv->perf.oa.b_counter_regs_len = 0;
-	dev_priv->perf.oa.flex_regs = NULL;
-	dev_priv->perf.oa.flex_regs_len = 0;
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so the config wouldn't have been advertised to
+			 * userspace and shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "7277228f-e7f3-4743-945a-6a2049d11377",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "463c668c-3f60-49b6-8f85-d995b635b3b2",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
 
-	switch (dev_priv->perf.oa.metrics_set) {
-	case METRIC_SET_ID_RENDER_BASIC:
-		dev_priv->perf.oa.n_mux_configs =
-			get_render_basic_mux_config(dev_priv,
-						    dev_priv->perf.oa.mux_regs,
-						    dev_priv->perf.oa.mux_regs_lens);
-		if (dev_priv->perf.oa.n_mux_configs == 0) {
-			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
 
-			/* EINVAL because *_register_sysfs already checked this
-			 * and so it wouldn't have been advertised to userspace and
-			 * so shouldn't have been requested
-			 */
-			return -EINVAL;
-		}
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
 
-		dev_priv->perf.oa.b_counter_regs =
-			b_counter_config_render_basic;
-		dev_priv->perf.oa.b_counter_regs_len =
-			ARRAY_SIZE(b_counter_config_render_basic);
+static struct attribute_group group_memory_reads = {
+	.name = "3ae6e74c-72c3-4040-9bd0-7961430b8cc8",
+	.attrs =  attrs_memory_reads,
+};
 
-		dev_priv->perf.oa.flex_regs =
-			flex_eu_config_render_basic;
-		dev_priv->perf.oa.flex_regs_len =
-			ARRAY_SIZE(flex_eu_config_render_basic);
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
 
-		return 0;
-	default:
-		return -ENODEV;
-	}
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "055f256d-4052-467c-8dec-6064a4806433",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "753972d4-87cd-4460-824d-754463ac5054",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
 }
 
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "4e4392e9-8f73-457b-ab44-b49f7a0c733b",
+	.attrs =  attrs_compute_l3_cache,
+};
+
 static ssize_t
-show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
 }
 
-static struct device_attribute dev_attr_render_basic_id = {
+static struct device_attribute dev_attr_hdc_and_sf_id = {
 	.attr = { .name = "id", .mode = 0444 },
-	.show = show_render_basic_id,
+	.show = show_hdc_and_sf_id,
 	.store = NULL,
 };
 
-static struct attribute *attrs_render_basic[] = {
-	&dev_attr_render_basic_id.attr,
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
 	NULL,
 };
 
-static struct attribute_group group_render_basic = {
-	.name = "bad77c24-cc64-480d-99bf-e7b740713800",
-	.attrs =  attrs_render_basic,
+static struct attribute_group group_hdc_and_sf = {
+	.name = "730d95dd-7da8-4e1c-ab8d-c0eb1e4c1805",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "d9e86d70-462b-462a-851e-fd63e8c13d63",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "52200424-6ee9-48b3-b7fa-0afcf1975e4d",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "1988315f-0a26-44df-acb0-df7ec86b1456",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "f1f17ca7-286e-4ae5-9d15-9fccad6c665d",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "00a9e0fb-3d2e-4405-852c-dce6334ffb3b",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "13dcc50a-7ec0-409b-99d6-a3f932cedcb3",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "97875e21-6624-4aee-9191-682feb3eae21",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "a5aa857d-e8f0-4dfa-8981-ce340fa748fd",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "0e8d8b86-4ee7-4cdd-aaaa-58adc92cb29e",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "882fa433-1f4a-4a67-a962-c741888fe5f5",
+	.attrs =  attrs_test_oa,
 };
 
 int
@@ -242,9 +2905,145 @@ i915_perf_register_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 		if (ret)
 			goto error_render_basic;
 	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
 
 	return 0;
 
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
 error_render_basic:
 	return ret;
 }
@@ -257,4 +3056,38 @@ i915_perf_unregister_sysfs_sklgt4(struct drm_i915_private *dev_priv)
 
 	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
 		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
 }
-- 
2.11.0


* [PATCH v15 08/14] drm/i915/perf: per-gen timebase for checking sample freq
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (6 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 09/14] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
                       ` (5 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

An oa_exponent_to_ns() utility and per-gen timebase constants were
recently removed when updating the tail pointer race condition WA.
This restores them so we can update the _PROP_OA_EXPONENT validation
done in read_properties_unlocked() to not assume we have a 12.5MHz
timebase as we did for Haswell.

Accordingly the oa_sample_rate_hard_limit value referenced by
proc_dointvec_minmax, which defines the absolute limit for the OA
sampling frequency, is now initialized to (timestamp_frequency / 2)
instead of the 6.25MHz constant that was only correct for Haswell.
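
As a rough worked example (illustrative only, not part of the patch),
the sampling period implied by a given OA exponent is:

    period_ns = (2 << exponent) * 1000000000 / timestamp_frequency

so exponent 0 corresponds to ~160ns on Haswell (12.5MHz), ~167ns on
Skylake (12MHz) and ~104ns on Broxton (19.2MHz), consistent with the
per-gen timestamp_frequency values set in i915_perf_init() below.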

v2:
    Specify frequency of 19.2MHz for BXT (Ville)
    Initialize oa_sample_rate_hard_limit per-gen too (Lionel)

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  1 +
 drivers/gpu/drm/i915/i915_perf.c | 39 ++++++++++++++++++++++++++++-----------
 2 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 45c86b2ec968..4446d42ef0ab 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2395,6 +2395,7 @@ struct drm_i915_private {
 
 			bool periodic;
 			int period_exponent;
+			int timestamp_frequency;
 
 			int metrics_set;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 9f524a0ec727..4fab9974b1b8 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -288,10 +288,12 @@ static u32 i915_perf_stream_paranoid = true;
 
 /* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
  *
- * 160ns is the smallest sampling period we can theoretically program the OA
- * unit with on Haswell, corresponding to 6.25MHz.
+ * The highest sampling frequency we can theoretically program the OA unit
+ * with is always half the timestamp frequency: E.g. 6.25Mhz for Haswell.
+ *
+ * Initialized just before we register the sysctl parameter.
  */
-static int oa_sample_rate_hard_limit = 6250000;
+static int oa_sample_rate_hard_limit;
 
 /* Theoretically we can program the OA unit to sample every 160ns but don't
  * allow that by default unless root...
@@ -2613,6 +2615,12 @@ i915_perf_open_ioctl_locked(struct drm_i915_private *dev_priv,
 	return ret;
 }
 
+static u64 oa_exponent_to_ns(struct drm_i915_private *dev_priv, int exponent)
+{
+	return div_u64(1000000000ULL * (2ULL << exponent),
+		       dev_priv->perf.oa.timestamp_frequency);
+}
+
 /**
  * read_properties_unlocked - validate + copy userspace stream open properties
  * @dev_priv: i915 device instance
@@ -2709,16 +2717,13 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			}
 
 			/* Theoretically we can program the OA unit to sample
-			 * every 160ns but don't allow that by default unless
-			 * root.
-			 *
-			 * On Haswell the period is derived from the exponent
-			 * as:
-			 *
-			 *   period = 80ns * 2^(exponent + 1)
+			 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns
+			 * for BXT. We don't allow such high sampling
+			 * frequencies by default unless root.
 			 */
+
 			BUILD_BUG_ON(sizeof(oa_period) != 8);
-			oa_period = 80ull * (2ull << value);
+			oa_period = oa_exponent_to_ns(dev_priv, value);
 
 			/* This check is primarily to ensure that oa_period <=
 			 * UINT32_MAX (before passing to do_div which only
@@ -2974,6 +2979,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		dev_priv->perf.oa.ops.oa_hw_tail_read =
 			gen7_oa_hw_tail_read;
 
+		dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 		dev_priv->perf.oa.oa_formats = hsw_oa_formats;
 
 		dev_priv->perf.oa.n_builtin_sets =
@@ -2989,6 +2996,9 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		if (IS_GEN8(dev_priv)) {
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x120;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x2ce;
+
+			dev_priv->perf.oa.timestamp_frequency = 12500000;
+
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<25);
 
 			if (IS_BROADWELL(dev_priv)) {
@@ -3005,6 +3015,9 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		} else if (IS_GEN9(dev_priv)) {
 			dev_priv->perf.oa.ctx_oactxctrl_offset = 0x128;
 			dev_priv->perf.oa.ctx_flexeu0_offset = 0x3de;
+
+			dev_priv->perf.oa.timestamp_frequency = 12000000;
+
 			dev_priv->perf.oa.gen8_valid_ctx_bit = (1<<16);
 
 			if (IS_SKL_GT2(dev_priv)) {
@@ -3023,6 +3036,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_sklgt4;
 			} else if (IS_BROXTON(dev_priv)) {
+				dev_priv->perf.oa.timestamp_frequency = 19200000;
+
 				dev_priv->perf.oa.n_builtin_sets =
 					i915_oa_n_builtin_metric_sets_bxt;
 				dev_priv->perf.oa.ops.select_metric_set =
@@ -3057,6 +3072,8 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
+		oa_sample_rate_hard_limit =
+			dev_priv->perf.oa.timestamp_frequency / 2;
 		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
 
 		dev_priv->perf.initialized = true;
-- 
2.11.0


* [PATCH v15 09/14] drm/i915/perf: remove perf.hook_lock
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (7 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 08/14] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 10/14] drm/i915: add KBL GT2/GT3 check macros Lionel Landwerlin
                       ` (4 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

From: Robert Bragg <robert@sixbynine.org>

In earlier iterations of the i915-perf driver we had a number of
callbacks/hooks from other parts of the i915 driver, e.g. to notify us
when a legacy context was pinned. These could run asynchronously with
respect to the stream file operations and might also run in atomic
context.

dev_priv->perf.hook_lock had been for serialising access to state needed
within these callbacks, but as the code has evolved some of the hooks
have gone away or are implemented to avoid needing to lock any state.

The remaining use of this lock was actually redundant considering how
the gen7 oacontrol state used to be updated as part of a context pin
hook.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  2 --
 drivers/gpu/drm/i915/i915_perf.c | 32 ++++++++++----------------------
 2 files changed, 10 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 4446d42ef0ab..20e9bb14aab4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2376,8 +2376,6 @@ struct drm_i915_private {
 		struct mutex lock;
 		struct list_head streams;
 
-		spinlock_t hook_lock;
-
 		struct {
 			struct i915_perf_stream *exclusive_stream;
 
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 4fab9974b1b8..9e5307bbaab1 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1810,9 +1810,17 @@ static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
 	gen8_configure_all_contexts(dev_priv, false);
 }
 
-static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
+static void gen7_oa_enable(struct drm_i915_private *dev_priv)
 {
-	lockdep_assert_held(&dev_priv->perf.hook_lock);
+	/* Reset buf pointers so we don't forward reports from before now.
+	 *
+	 * Think carefully if considering trying to avoid this, since it
+	 * also ensures status flags and the buffer itself are cleared
+	 * in error paths, and we have checks for invalid reports based
+	 * on the assumption that certain fields are written to zeroed
+	 * memory which this helps maintains.
+	 */
+	gen7_init_oa_buffer(dev_priv);
 
 	if (dev_priv->perf.oa.exclusive_stream->enabled) {
 		struct i915_gem_context *ctx =
@@ -1835,25 +1843,6 @@ static void gen7_update_oacontrol_locked(struct drm_i915_private *dev_priv)
 		I915_WRITE(GEN7_OACONTROL, 0);
 }
 
-static void gen7_oa_enable(struct drm_i915_private *dev_priv)
-{
-	unsigned long flags;
-
-	/* Reset buf pointers so we don't forward reports from before now.
-	 *
-	 * Think carefully if considering trying to avoid this, since it
-	 * also ensures status flags and the buffer itself are cleared
-	 * in error paths, and we have checks for invalid reports based
-	 * on the assumption that certain fields are written to zeroed
-	 * memory which this helps maintains.
-	 */
-	gen7_init_oa_buffer(dev_priv);
-
-	spin_lock_irqsave(&dev_priv->perf.hook_lock, flags);
-	gen7_update_oacontrol_locked(dev_priv);
-	spin_unlock_irqrestore(&dev_priv->perf.hook_lock, flags);
-}
-
 static void gen8_oa_enable(struct drm_i915_private *dev_priv)
 {
 	u32 report_format = dev_priv->perf.oa.oa_buffer.format;
@@ -3069,7 +3058,6 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 
 		INIT_LIST_HEAD(&dev_priv->perf.streams);
 		mutex_init(&dev_priv->perf.lock);
-		spin_lock_init(&dev_priv->perf.hook_lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
 		oa_sample_rate_hard_limit =
-- 
2.11.0


* [PATCH v15 10/14] drm/i915: add KBL GT2/GT3 check macros
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (8 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 09/14] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 11/14] drm/i915/perf: add KBL support Lionel Landwerlin
                       ` (3 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

Add macros to detect GT2/GT3 SKUs so we can apply the proper OA
configuration later.
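
As an illustrative sketch only (the actual hook-up lands in the
following perf patch), the intent is to use these checks along the
same lines as the existing SKL GT2/GT3/GT4 checks when selecting the
per-SKU OA config in i915_perf_init():

	} else if (IS_KBL_GT2(dev_priv)) {
		dev_priv->perf.oa.n_builtin_sets =
			i915_oa_n_builtin_metric_sets_kblgt2;
		dev_priv->perf.oa.ops.select_metric_set =
			i915_oa_select_metric_set_kblgt2;
	} else if (IS_KBL_GT3(dev_priv)) {
		dev_priv->perf.oa.n_builtin_sets =
			i915_oa_n_builtin_metric_sets_kblgt3;
		dev_priv->perf.oa.ops.select_metric_set =
			i915_oa_select_metric_set_kblgt3;
	}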

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 20e9bb14aab4..c3acb0e9eb5d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2803,6 +2803,10 @@ intel_info(const struct drm_i915_private *dev_priv)
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 #define IS_SKL_GT4(dev_priv)	(IS_SKYLAKE(dev_priv) && \
 				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0030)
+#define IS_KBL_GT2(dev_priv)	(IS_KABYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0010)
+#define IS_KBL_GT3(dev_priv)	(IS_KABYLAKE(dev_priv) && \
+				 (INTEL_DEVID(dev_priv) & 0x00F0) == 0x0020)
 
 #define IS_ALPHA_SUPPORT(intel_info) ((intel_info)->is_alpha_support)
 
-- 
2.11.0


* [PATCH v15 11/14] drm/i915/perf: add KBL support
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (9 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 10/14] drm/i915: add KBL GT2/GT3 check macros Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 12/14] drm/i915/perf: add GLK support Lionel Landwerlin
                       ` (2 subsequent siblings)
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

Add OA support for Kabylake (pretty much identical to Skylake), along
with the associated OA configurations.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/Makefile         |    4 +-
 drivers/gpu/drm/i915/i915_oa_kblgt2.c | 2991 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt2.h |   40 +
 drivers/gpu/drm/i915/i915_oa_kblgt3.c | 3040 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_kblgt3.h |   40 +
 drivers/gpu/drm/i915/i915_perf.c      |   30 +-
 6 files changed, 6143 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt2.h
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_kblgt3.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 49a628cdef9e..033a2df01dbe 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -135,7 +135,9 @@ i915-y += i915_perf.o \
 	  i915_oa_sklgt2.o \
 	  i915_oa_sklgt3.o \
 	  i915_oa_sklgt4.o \
-	  i915_oa_bxt.o
+	  i915_oa_bxt.o \
+	  i915_oa_kblgt2.o \
+	  i915_oa_kblgt3.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt2.c b/drivers/gpu/drm/i915/i915_oa_kblgt2.c
new file mode 100644
index 000000000000..87dbd0a0b076
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt2.c
@@ -0,0 +1,2991 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_kblgt2.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
+};
+
+int i915_oa_n_builtin_metric_sets_kblgt2 = 18;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0080 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0d2000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162c2200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190001f },
+	{ _MMIO(0x9888), 0x51904400 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c21 },
+	{ _MMIO(0x9888), 0x47900061 },
+	{ _MMIO(0x9888), 0x57904440 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900004 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57900400 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0e0f006c },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x1190e000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c00 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x143a5800 },
+	{ _MMIO(0x9888), 0x163a00c0 },
+	{ _MMIO(0x9888), 0x12380240 },
+	{ _MMIO(0x9888), 0x14380002 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c1500 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f9500 },
+	{ _MMIO(0x9888), 0x100f002a },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x0a2dc000 },
+	{ _MMIO(0x9888), 0x0c2dc000 },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x06393000 },
+	{ _MMIO(0x9888), 0x0c3a28c1 },
+	{ _MMIO(0x9888), 0x003a0000 },
+	{ _MMIO(0x9888), 0x0a33f000 },
+	{ _MMIO(0x9888), 0x0c33f000 },
+	{ _MMIO(0x9888), 0x0a37a000 },
+	{ _MMIO(0x9888), 0x0c37a000 },
+	{ _MMIO(0x9888), 0x0a380977 },
+	{ _MMIO(0x9888), 0x08380000 },
+	{ _MMIO(0x9888), 0x04380000 },
+	{ _MMIO(0x9888), 0x06383000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900800 },
+	{ _MMIO(0x9888), 0x47901000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900844 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_kblgt2(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* Return -EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* Return -EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* Return -EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* Return -EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* Return -EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* Return -EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
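+/* Each metric set below is exposed to userspace as a sysfs group named by
+ * the set's configuration UUID; the group's single read-only "id" file
+ * reports the corresponding METRIC_SET_ID_* value accepted by
+ * i915_oa_select_metric_set_kblgt2() above.
+ */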
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "f8d677e9-ff6f-4df1-9310-0334c6efacce",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "e17fc42a-e614-41b6-90c4-1074841a6c77",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "d7a17a3a-ca71-40d2-a919-ace80d50633f",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "57b59202-172b-477a-87de-33f85572c589",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "3addf8ef-8e9b-40f5-a448-3dbb5d5128b0",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "4af0400a-81c3-47db-a6b6-deddbd75680e",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "0e22f995-79ca-4f67-83ab-e9d9772488d8",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "bc2a00f7-cb8a-4ff2-8ad0-e241dad16937",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "d2bbe790-f058-42d9-81c6-cdedcf655bc2",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "2f8e32e4-5956-46e2-af31-c8ea95887332",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "ca046aad-b5fb-4101-adce-6473ee6e5b14",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "605f388f-24bb-455c-88e3-8d57ae0d7e9f",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "31dd157c-bf4e-4bab-bf2b-f5c8174af1af",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "105db928-5542-466b-9128-e1f3c91426cb",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "03db94d2-b37f-4c58-a791-0d2067b013bb",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "aa7a3fb9-22fb-43ff-a32d-0ab6c13bbd16",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "398a4268-ef6f-4ffc-b55f-3c7b5363ce61",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "baa3c7e4-52b6-4b85-801e-465a94b746dd",
+	.attrs =  attrs_test_oa,
+};
+
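+/* Advertise only the metric sets for which a MUX config exists on this
+ * device: one sysfs group is created per available set, and any groups
+ * already created are removed in reverse order if a later creation fails.
+ */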
+int
+i915_perf_register_sysfs_kblgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
+
+	return 0;
+
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+error_render_basic:
+	return ret;
+}
+
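+/* Mirror of i915_perf_register_sysfs_kblgt2(): remove only the groups that
+ * were advertised for this device.
+ */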
+void
+i915_perf_unregister_sysfs_kblgt2(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt2.h b/drivers/gpu/drm/i915/i915_oa_kblgt2.h
new file mode 100644
index 000000000000..7e61bfc4f9f5
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt2.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_KBLGT2_H__
+#define __I915_OA_KBLGT2_H__
+
+extern int i915_oa_n_builtin_metric_sets_kblgt2;
+
+extern int i915_oa_select_metric_set_kblgt2(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_kblgt2(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_kblgt2(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt3.c b/drivers/gpu/drm/i915/i915_oa_kblgt3.c
new file mode 100644
index 000000000000..6ed092566a32
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt3.c
@@ -0,0 +1,3040 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_kblgt3.h"
+
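+/* Metric sets available on KBL GT3; i915_oa_n_builtin_metric_sets_kblgt3
+ * below reflects the number of entries here.
+ */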
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_L3_2,
+	METRIC_SET_ID_L3_3,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_VME_PIPE,
+	METRIC_SET_ID_TEST_OA,
+};
+
+int i915_oa_n_builtin_metric_sets_kblgt3 = 18;
+
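+/* Each metric set is described by three register/value lists: a B-counter
+ * configuration, a flexible EU counter configuration, and a MUX
+ * configuration programmed through register 0x9888.
+ */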
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c01e0 },
+	{ _MMIO(0x9888), 0x12170280 },
+	{ _MMIO(0x9888), 0x12370280 },
+	{ _MMIO(0x9888), 0x16ec01e0 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x1a4e0380 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x0a1b4000 },
+	{ _MMIO(0x9888), 0x1c1c0001 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x042f1000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c8400 },
+	{ _MMIO(0x9888), 0x0c4c0002 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f6600 },
+	{ _MMIO(0x9888), 0x100f0001 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x162ca200 },
+	{ _MMIO(0x9888), 0x062d8000 },
+	{ _MMIO(0x9888), 0x082d8000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x08133000 },
+	{ _MMIO(0x9888), 0x00170020 },
+	{ _MMIO(0x9888), 0x08170021 },
+	{ _MMIO(0x9888), 0x10170000 },
+	{ _MMIO(0x9888), 0x0633c000 },
+	{ _MMIO(0x9888), 0x0833c000 },
+	{ _MMIO(0x9888), 0x06370800 },
+	{ _MMIO(0x9888), 0x08370840 },
+	{ _MMIO(0x9888), 0x10370000 },
+	{ _MMIO(0x9888), 0x1ace0200 },
+	{ _MMIO(0x9888), 0x0aec5300 },
+	{ _MMIO(0x9888), 0x10ec0000 },
+	{ _MMIO(0x9888), 0x1cec0000 },
+	{ _MMIO(0x9888), 0x0a9b8000 },
+	{ _MMIO(0x9888), 0x1c9c0002 },
+	{ _MMIO(0x9888), 0x0ccc0002 },
+	{ _MMIO(0x9888), 0x0a8d8000 },
+	{ _MMIO(0x9888), 0x108f0001 },
+	{ _MMIO(0x9888), 0x16ac8000 },
+	{ _MMIO(0x9888), 0x0d933031 },
+	{ _MMIO(0x9888), 0x0f933e3f },
+	{ _MMIO(0x9888), 0x01933d00 },
+	{ _MMIO(0x9888), 0x0393073c },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1d930000 },
+	{ _MMIO(0x9888), 0x19930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d908000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190003f },
+	{ _MMIO(0x9888), 0x51902240 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x55900242 },
+	{ _MMIO(0x9888), 0x45900084 },
+	{ _MMIO(0x9888), 0x47901400 },
+	{ _MMIO(0x9888), 0x57902220 },
+	{ _MMIO(0x9888), 0x49900c60 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900002 },
+	{ _MMIO(0x9888), 0x43900c63 },
+	{ _MMIO(0x9888), 0x53902222 },
+};
+
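+/* The get_*_mux_config() helpers fill the caller's regs[]/lens[] arrays with
+ * the applicable MUX register lists and return how many were added; a return
+ * of 0 means no suitable config exists and the set is not advertised.
+ */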
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x1a4e0820 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x064f0900 },
+	{ _MMIO(0x9888), 0x084f0032 },
+	{ _MMIO(0x9888), 0x0a4f1891 },
+	{ _MMIO(0x9888), 0x0c4f0e00 },
+	{ _MMIO(0x9888), 0x0e4f003c },
+	{ _MMIO(0x9888), 0x004f0d80 },
+	{ _MMIO(0x9888), 0x024f003b },
+	{ _MMIO(0x9888), 0x006c0002 },
+	{ _MMIO(0x9888), 0x086c0100 },
+	{ _MMIO(0x9888), 0x0c6c000c },
+	{ _MMIO(0x9888), 0x0e6c0b00 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x081b8000 },
+	{ _MMIO(0x9888), 0x0c1b4000 },
+	{ _MMIO(0x9888), 0x0e1b8000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1c8000 },
+	{ _MMIO(0x9888), 0x1c1c0024 },
+	{ _MMIO(0x9888), 0x065b8000 },
+	{ _MMIO(0x9888), 0x085b4000 },
+	{ _MMIO(0x9888), 0x0a5bc000 },
+	{ _MMIO(0x9888), 0x0c5b8000 },
+	{ _MMIO(0x9888), 0x0e5b4000 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025b4000 },
+	{ _MMIO(0x9888), 0x1a5c6000 },
+	{ _MMIO(0x9888), 0x1c5c001b },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2000 },
+	{ _MMIO(0x9888), 0x0c4c0208 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020d2000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2cc000 },
+	{ _MMIO(0x9888), 0x162cfb00 },
+	{ _MMIO(0x9888), 0x182c00be },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x19900157 },
+	{ _MMIO(0x9888), 0x1b900158 },
+	{ _MMIO(0x9888), 0x1d900105 },
+	{ _MMIO(0x9888), 0x1f900103 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x11900fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900821 },
+	{ _MMIO(0x9888), 0x47900802 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900802 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900002 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900422 },
+	{ _MMIO(0x9888), 0x53904444 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c0e001f },
+	{ _MMIO(0x9888), 0x0a0f0000 },
+	{ _MMIO(0x9888), 0x10116800 },
+	{ _MMIO(0x9888), 0x178a03e0 },
+	{ _MMIO(0x9888), 0x11824c00 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x13840020 },
+	{ _MMIO(0x9888), 0x11850019 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x01870c40 },
+	{ _MMIO(0x9888), 0x17880000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x0a4c0040 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x040d4000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020e5400 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x080f0040 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x0e0f0040 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06110012 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x01898000 },
+	{ _MMIO(0x9888), 0x0d890100 },
+	{ _MMIO(0x9888), 0x03898000 },
+	{ _MMIO(0x9888), 0x09808000 },
+	{ _MMIO(0x9888), 0x0b808000 },
+	{ _MMIO(0x9888), 0x0380c000 },
+	{ _MMIO(0x9888), 0x0f8a0075 },
+	{ _MMIO(0x9888), 0x1d8a0000 },
+	{ _MMIO(0x9888), 0x118a8000 },
+	{ _MMIO(0x9888), 0x1b8a4000 },
+	{ _MMIO(0x9888), 0x138a8000 },
+	{ _MMIO(0x9888), 0x1d81a000 },
+	{ _MMIO(0x9888), 0x15818000 },
+	{ _MMIO(0x9888), 0x17818000 },
+	{ _MMIO(0x9888), 0x0b820030 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x0d824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x05824000 },
+	{ _MMIO(0x9888), 0x0d830003 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x03838000 },
+	{ _MMIO(0x9888), 0x07838000 },
+	{ _MMIO(0x9888), 0x0b840980 },
+	{ _MMIO(0x9888), 0x03844d80 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x09848000 },
+	{ _MMIO(0x9888), 0x09850080 },
+	{ _MMIO(0x9888), 0x03850003 },
+	{ _MMIO(0x9888), 0x01850000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x09870032 },
+	{ _MMIO(0x9888), 0x01888052 },
+	{ _MMIO(0x9888), 0x11880000 },
+	{ _MMIO(0x9888), 0x09884000 },
+	{ _MMIO(0x9888), 0x1b931001 },
+	{ _MMIO(0x9888), 0x1d930001 },
+	{ _MMIO(0x9888), 0x19934000 },
+	{ _MMIO(0x9888), 0x1b958000 },
+	{ _MMIO(0x9888), 0x1d950094 },
+	{ _MMIO(0x9888), 0x19958000 },
+	{ _MMIO(0x9888), 0x09e58000 },
+	{ _MMIO(0x9888), 0x0be58000 },
+	{ _MMIO(0x9888), 0x03e5c000 },
+	{ _MMIO(0x9888), 0x0592c000 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d924000 },
+	{ _MMIO(0x9888), 0x0f924000 },
+	{ _MMIO(0x9888), 0x11928000 },
+	{ _MMIO(0x9888), 0x1392c000 },
+	{ _MMIO(0x9888), 0x09924000 },
+	{ _MMIO(0x9888), 0x01985000 },
+	{ _MMIO(0x9888), 0x07988000 },
+	{ _MMIO(0x9888), 0x09981000 },
+	{ _MMIO(0x9888), 0x0b982000 },
+	{ _MMIO(0x9888), 0x0d982000 },
+	{ _MMIO(0x9888), 0x0f989000 },
+	{ _MMIO(0x9888), 0x05982000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1190c080 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900440 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900c21 },
+	{ _MMIO(0x9888), 0x57900400 },
+	{ _MMIO(0x9888), 0x49900042 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900024 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900841 },
+	{ _MMIO(0x9888), 0x53900400 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f900064 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900150 },
+	{ _MMIO(0x9888), 0x21900151 },
+	{ _MMIO(0x9888), 0x23900152 },
+	{ _MMIO(0x9888), 0x25900153 },
+	{ _MMIO(0x9888), 0x27900154 },
+	{ _MMIO(0x9888), 0x29900155 },
+	{ _MMIO(0x9888), 0x2b900156 },
+	{ _MMIO(0x9888), 0x2d900157 },
+	{ _MMIO(0x9888), 0x2f90015f },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x11810c00 },
+	{ _MMIO(0x9888), 0x1381001a },
+	{ _MMIO(0x9888), 0x37906800 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x03811300 },
+	{ _MMIO(0x9888), 0x05811b12 },
+	{ _MMIO(0x9888), 0x0781001a },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x17810000 },
+	{ _MMIO(0x9888), 0x19810000 },
+	{ _MMIO(0x9888), 0x1b810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930055 },
+	{ _MMIO(0x9888), 0x03e58000 },
+	{ _MMIO(0x9888), 0x05e5c000 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x13900160 },
+	{ _MMIO(0x9888), 0x21900161 },
+	{ _MMIO(0x9888), 0x23900162 },
+	{ _MMIO(0x9888), 0x25900163 },
+	{ _MMIO(0x9888), 0x27900164 },
+	{ _MMIO(0x9888), 0x29900165 },
+	{ _MMIO(0x9888), 0x2b900166 },
+	{ _MMIO(0x9888), 0x2d900167 },
+	{ _MMIO(0x9888), 0x2f900150 },
+	{ _MMIO(0x9888), 0x31900105 },
+	{ _MMIO(0x9888), 0x15900103 },
+	{ _MMIO(0x9888), 0x17900101 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c60 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900c00 },
+	{ _MMIO(0x9888), 0x47900c63 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900c63 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900063 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x106c00e0 },
+	{ _MMIO(0x9888), 0x141c8160 },
+	{ _MMIO(0x9888), 0x161c8015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4eaaa0 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0e6c0b01 },
+	{ _MMIO(0x9888), 0x006c0200 },
+	{ _MMIO(0x9888), 0x026c000c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x001c0041 },
+	{ _MMIO(0x9888), 0x061c4200 },
+	{ _MMIO(0x9888), 0x081c4443 },
+	{ _MMIO(0x9888), 0x0a1c4645 },
+	{ _MMIO(0x9888), 0x0c1c7647 },
+	{ _MMIO(0x9888), 0x041c7357 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x101c0000 },
+	{ _MMIO(0x9888), 0x1a1c0000 },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4caa2a },
+	{ _MMIO(0x9888), 0x0c4c02aa },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x000da000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x0c0f5400 },
+	{ _MMIO(0x9888), 0x0e0f5515 },
+	{ _MMIO(0x9888), 0x100f0155 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x11907fff },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900040 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900802 },
+	{ _MMIO(0x9888), 0x47900842 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900842 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x43900800 },
+	{ _MMIO(0x9888), 0x53900000 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c0760 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900003 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x184e8000 },
+	{ _MMIO(0x9888), 0x1a4e8020 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x186c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x101c8000 },
+	{ _MMIO(0x9888), 0x1a1ce000 },
+	{ _MMIO(0x9888), 0x1c1c0030 },
+	{ _MMIO(0x9888), 0x004c8000 },
+	{ _MMIO(0x9888), 0x0a4c2a00 },
+	{ _MMIO(0x9888), 0x0c4c0280 },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f1500 },
+	{ _MMIO(0x9888), 0x100f0140 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162c0a00 },
+	{ _MMIO(0x9888), 0x182c00a0 },
+	{ _MMIO(0x9888), 0x03933300 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x1b930000 },
+	{ _MMIO(0x9888), 0x1d900157 },
+	{ _MMIO(0x9888), 0x1f900158 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1190030f },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x53904444 },
+	{ _MMIO(0x9888), 0x43900000 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x106c0232 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x004f1880 },
+	{ _MMIO(0x9888), 0x024f08bb },
+	{ _MMIO(0x9888), 0x044f001b },
+	{ _MMIO(0x9888), 0x046c0100 },
+	{ _MMIO(0x9888), 0x066c000b },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x041b8000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x005b8000 },
+	{ _MMIO(0x9888), 0x025bc000 },
+	{ _MMIO(0x9888), 0x045b4000 },
+	{ _MMIO(0x9888), 0x125c8000 },
+	{ _MMIO(0x9888), 0x145c8000 },
+	{ _MMIO(0x9888), 0x165c8000 },
+	{ _MMIO(0x9888), 0x185c8000 },
+	{ _MMIO(0x9888), 0x0a4c00a0 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x022cc000 },
+	{ _MMIO(0x9888), 0x042cc000 },
+	{ _MMIO(0x9888), 0x062cc000 },
+	{ _MMIO(0x9888), 0x082cc000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x09830000 },
+	{ _MMIO(0x9888), 0x07830000 },
+	{ _MMIO(0x9888), 0x1d950080 },
+	{ _MMIO(0x9888), 0x13928000 },
+	{ _MMIO(0x9888), 0x0f988000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900040 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x126c7b40 },
+	{ _MMIO(0x9888), 0x166c0020 },
+	{ _MMIO(0x9888), 0x0a603444 },
+	{ _MMIO(0x9888), 0x0a613400 },
+	{ _MMIO(0x9888), 0x1a4ea800 },
+	{ _MMIO(0x9888), 0x1c4e0002 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1e6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0800 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x0e1bc000 },
+	{ _MMIO(0x9888), 0x001b8000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x1c1c003c },
+	{ _MMIO(0x9888), 0x121c8000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x10600000 },
+	{ _MMIO(0x9888), 0x04600000 },
+	{ _MMIO(0x9888), 0x0c610044 },
+	{ _MMIO(0x9888), 0x10610000 },
+	{ _MMIO(0x9888), 0x06610000 },
+	{ _MMIO(0x9888), 0x0c4c02a8 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0154 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x182c00aa },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190ffc0 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900021 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900400 },
+	{ _MMIO(0x9888), 0x43900421 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_2[] = {
+	{ _MMIO(0x9888), 0x126c02e0 },
+	{ _MMIO(0x9888), 0x146c0001 },
+	{ _MMIO(0x9888), 0x0a623400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x064f4000 },
+	{ _MMIO(0x9888), 0x026c3324 },
+	{ _MMIO(0x9888), 0x046c3422 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c0800 },
+	{ _MMIO(0x9888), 0x065b4000 },
+	{ _MMIO(0x9888), 0x1a5c1000 },
+	{ _MMIO(0x9888), 0x06614000 },
+	{ _MMIO(0x9888), 0x0c620044 },
+	{ _MMIO(0x9888), 0x10620000 },
+	{ _MMIO(0x9888), 0x06620000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c002a },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2cc000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_2_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_2;
+	lens[n] = ARRAY_SIZE(mux_config_l3_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_3[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00028002 },
+	{ _MMIO(0x277c), 0x000087ff },
+	{ _MMIO(0x2780), 0x00020002 },
+	{ _MMIO(0x2784), 0x00008fff },
+	{ _MMIO(0x2788), 0x00008002 },
+	{ _MMIO(0x278c), 0x0000a7ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_3[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_3[] = {
+	{ _MMIO(0x9888), 0x126c4e80 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x0a633400 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e8000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x026c3321 },
+	{ _MMIO(0x9888), 0x046c342f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1a6c2000 },
+	{ _MMIO(0x9888), 0x021bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x061b4000 },
+	{ _MMIO(0x9888), 0x141c8000 },
+	{ _MMIO(0x9888), 0x161c8000 },
+	{ _MMIO(0x9888), 0x181c8000 },
+	{ _MMIO(0x9888), 0x1a1c1800 },
+	{ _MMIO(0x9888), 0x06604000 },
+	{ _MMIO(0x9888), 0x0c630044 },
+	{ _MMIO(0x9888), 0x10630000 },
+	{ _MMIO(0x9888), 0x06630000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c00aa },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f4000 },
+	{ _MMIO(0x9888), 0x0e0f0055 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190f800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900002 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_l3_3_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_3;
+	lens[n] = ARRAY_SIZE(mux_config_l3_3);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102f3800 },
+	{ _MMIO(0x9888), 0x144d0500 },
+	{ _MMIO(0x9888), 0x120d03c0 },
+	{ _MMIO(0x9888), 0x140d03cf },
+	{ _MMIO(0x9888), 0x0c0f0004 },
+	{ _MMIO(0x9888), 0x0c4e4000 },
+	{ _MMIO(0x9888), 0x042f0480 },
+	{ _MMIO(0x9888), 0x082f0000 },
+	{ _MMIO(0x9888), 0x022f0000 },
+	{ _MMIO(0x9888), 0x0a4c0090 },
+	{ _MMIO(0x9888), 0x064d0027 },
+	{ _MMIO(0x9888), 0x004d0000 },
+	{ _MMIO(0x9888), 0x000d0d40 },
+	{ _MMIO(0x9888), 0x020d803f },
+	{ _MMIO(0x9888), 0x040d8023 },
+	{ _MMIO(0x9888), 0x100d0000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x020f0010 },
+	{ _MMIO(0x9888), 0x000f0000 },
+	{ _MMIO(0x9888), 0x0e0f0050 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41901400 },
+	{ _MMIO(0x9888), 0x43901485 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x14152c00 },
+	{ _MMIO(0x9888), 0x16150005 },
+	{ _MMIO(0x9888), 0x121600a0 },
+	{ _MMIO(0x9888), 0x14352c00 },
+	{ _MMIO(0x9888), 0x16350005 },
+	{ _MMIO(0x9888), 0x123600a0 },
+	{ _MMIO(0x9888), 0x14552c00 },
+	{ _MMIO(0x9888), 0x16550005 },
+	{ _MMIO(0x9888), 0x125600a0 },
+	{ _MMIO(0x9888), 0x062f6000 },
+	{ _MMIO(0x9888), 0x022f2000 },
+	{ _MMIO(0x9888), 0x0c4c0050 },
+	{ _MMIO(0x9888), 0x0a4c0010 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f0350 },
+	{ _MMIO(0x9888), 0x0c0fb000 },
+	{ _MMIO(0x9888), 0x0e0f00da },
+	{ _MMIO(0x9888), 0x182c0028 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x022dc000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x0c138000 },
+	{ _MMIO(0x9888), 0x0e132000 },
+	{ _MMIO(0x9888), 0x0413c000 },
+	{ _MMIO(0x9888), 0x1c140018 },
+	{ _MMIO(0x9888), 0x0c157000 },
+	{ _MMIO(0x9888), 0x0e150078 },
+	{ _MMIO(0x9888), 0x10150000 },
+	{ _MMIO(0x9888), 0x04162180 },
+	{ _MMIO(0x9888), 0x02160000 },
+	{ _MMIO(0x9888), 0x04174000 },
+	{ _MMIO(0x9888), 0x0233a000 },
+	{ _MMIO(0x9888), 0x04333000 },
+	{ _MMIO(0x9888), 0x14348000 },
+	{ _MMIO(0x9888), 0x16348000 },
+	{ _MMIO(0x9888), 0x02357870 },
+	{ _MMIO(0x9888), 0x10350000 },
+	{ _MMIO(0x9888), 0x04360043 },
+	{ _MMIO(0x9888), 0x02360000 },
+	{ _MMIO(0x9888), 0x04371000 },
+	{ _MMIO(0x9888), 0x0e538000 },
+	{ _MMIO(0x9888), 0x00538000 },
+	{ _MMIO(0x9888), 0x06533000 },
+	{ _MMIO(0x9888), 0x1c540020 },
+	{ _MMIO(0x9888), 0x12548000 },
+	{ _MMIO(0x9888), 0x0e557000 },
+	{ _MMIO(0x9888), 0x00557800 },
+	{ _MMIO(0x9888), 0x10550000 },
+	{ _MMIO(0x9888), 0x06560043 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x06571000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900060 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900842 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900060 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x12120000 },
+	{ _MMIO(0x9888), 0x12320000 },
+	{ _MMIO(0x9888), 0x12520000 },
+	{ _MMIO(0x9888), 0x002f8000 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0015 },
+	{ _MMIO(0x9888), 0x0c0d8000 },
+	{ _MMIO(0x9888), 0x0e0da000 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x100f03a0 },
+	{ _MMIO(0x9888), 0x0c0ff000 },
+	{ _MMIO(0x9888), 0x0e0f0095 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x02108000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x02118000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x02121880 },
+	{ _MMIO(0x9888), 0x041219b5 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x02134000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x0c308000 },
+	{ _MMIO(0x9888), 0x0e304000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x0c318000 },
+	{ _MMIO(0x9888), 0x0e314000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x0c321a80 },
+	{ _MMIO(0x9888), 0x0e320033 },
+	{ _MMIO(0x9888), 0x06320031 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x0c334000 },
+	{ _MMIO(0x9888), 0x0e331000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0e508000 },
+	{ _MMIO(0x9888), 0x00508000 },
+	{ _MMIO(0x9888), 0x02504000 },
+	{ _MMIO(0x9888), 0x0e518000 },
+	{ _MMIO(0x9888), 0x00518000 },
+	{ _MMIO(0x9888), 0x02514000 },
+	{ _MMIO(0x9888), 0x0e521880 },
+	{ _MMIO(0x9888), 0x00521a80 },
+	{ _MMIO(0x9888), 0x02520033 },
+	{ _MMIO(0x9888), 0x0e534000 },
+	{ _MMIO(0x9888), 0x00534000 },
+	{ _MMIO(0x9888), 0x02531000 },
+	{ _MMIO(0x9888), 0x1190ff80 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900800 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4b900062 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x12124d60 },
+	{ _MMIO(0x9888), 0x12322e60 },
+	{ _MMIO(0x9888), 0x12524d60 },
+	{ _MMIO(0x9888), 0x022f3000 },
+	{ _MMIO(0x9888), 0x0a4c0014 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0fe000 },
+	{ _MMIO(0x9888), 0x0e0f0097 },
+	{ _MMIO(0x9888), 0x082c8000 },
+	{ _MMIO(0x9888), 0x0a2c8000 },
+	{ _MMIO(0x9888), 0x002d8000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x0410c000 },
+	{ _MMIO(0x9888), 0x0411c000 },
+	{ _MMIO(0x9888), 0x04121fb7 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x04135000 },
+	{ _MMIO(0x9888), 0x00308000 },
+	{ _MMIO(0x9888), 0x06304000 },
+	{ _MMIO(0x9888), 0x00318000 },
+	{ _MMIO(0x9888), 0x06314000 },
+	{ _MMIO(0x9888), 0x00321b80 },
+	{ _MMIO(0x9888), 0x0632003f },
+	{ _MMIO(0x9888), 0x00334000 },
+	{ _MMIO(0x9888), 0x06331000 },
+	{ _MMIO(0x9888), 0x0250c000 },
+	{ _MMIO(0x9888), 0x0251c000 },
+	{ _MMIO(0x9888), 0x02521fb7 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x02535000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900800 },
+	{ _MMIO(0x9888), 0x43900063 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900040 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
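+/* The COMPUTE_EXTRA set needs no boolean counter or flex-EU programming, so
+ * the two lists below are intentionally empty; their ARRAY_SIZE of 0 simply
+ * means no writes for those register groups.
+ */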
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x121203e0 },
+	{ _MMIO(0x9888), 0x123203e0 },
+	{ _MMIO(0x9888), 0x125203e0 },
+	{ _MMIO(0x9888), 0x129203e0 },
+	{ _MMIO(0x9888), 0x12b203e0 },
+	{ _MMIO(0x9888), 0x12d203e0 },
+	{ _MMIO(0x9888), 0x024ec000 },
+	{ _MMIO(0x9888), 0x044ec000 },
+	{ _MMIO(0x9888), 0x064ec000 },
+	{ _MMIO(0x9888), 0x022f4000 },
+	{ _MMIO(0x9888), 0x084ca000 },
+	{ _MMIO(0x9888), 0x0a4c0042 },
+	{ _MMIO(0x9888), 0x000d8000 },
+	{ _MMIO(0x9888), 0x020da000 },
+	{ _MMIO(0x9888), 0x040da000 },
+	{ _MMIO(0x9888), 0x060d2000 },
+	{ _MMIO(0x9888), 0x0c0f5000 },
+	{ _MMIO(0x9888), 0x0e0f006d },
+	{ _MMIO(0x9888), 0x022c8000 },
+	{ _MMIO(0x9888), 0x042c8000 },
+	{ _MMIO(0x9888), 0x062c8000 },
+	{ _MMIO(0x9888), 0x0c2c8000 },
+	{ _MMIO(0x9888), 0x042d8000 },
+	{ _MMIO(0x9888), 0x06104000 },
+	{ _MMIO(0x9888), 0x06114000 },
+	{ _MMIO(0x9888), 0x06120033 },
+	{ _MMIO(0x9888), 0x00120000 },
+	{ _MMIO(0x9888), 0x06131000 },
+	{ _MMIO(0x9888), 0x04308000 },
+	{ _MMIO(0x9888), 0x04318000 },
+	{ _MMIO(0x9888), 0x04321980 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x04334000 },
+	{ _MMIO(0x9888), 0x04504000 },
+	{ _MMIO(0x9888), 0x04514000 },
+	{ _MMIO(0x9888), 0x04520033 },
+	{ _MMIO(0x9888), 0x00520000 },
+	{ _MMIO(0x9888), 0x04531000 },
+	{ _MMIO(0x9888), 0x00af8000 },
+	{ _MMIO(0x9888), 0x0acc0001 },
+	{ _MMIO(0x9888), 0x008d8000 },
+	{ _MMIO(0x9888), 0x028da000 },
+	{ _MMIO(0x9888), 0x0c8fb000 },
+	{ _MMIO(0x9888), 0x0e8f0001 },
+	{ _MMIO(0x9888), 0x06ac8000 },
+	{ _MMIO(0x9888), 0x02ad4000 },
+	{ _MMIO(0x9888), 0x02908000 },
+	{ _MMIO(0x9888), 0x02918000 },
+	{ _MMIO(0x9888), 0x02921980 },
+	{ _MMIO(0x9888), 0x00920000 },
+	{ _MMIO(0x9888), 0x02934000 },
+	{ _MMIO(0x9888), 0x02b04000 },
+	{ _MMIO(0x9888), 0x02b14000 },
+	{ _MMIO(0x9888), 0x02b20033 },
+	{ _MMIO(0x9888), 0x00b20000 },
+	{ _MMIO(0x9888), 0x02b31000 },
+	{ _MMIO(0x9888), 0x00d08000 },
+	{ _MMIO(0x9888), 0x00d18000 },
+	{ _MMIO(0x9888), 0x00d21980 },
+	{ _MMIO(0x9888), 0x00d34000 },
+	{ _MMIO(0x9888), 0x1190fc00 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x51900000 },
+	{ _MMIO(0x9888), 0x41900c00 },
+	{ _MMIO(0x9888), 0x43900002 },
+	{ _MMIO(0x9888), 0x53900420 },
+	{ _MMIO(0x9888), 0x459000a1 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_vme_pipe[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00100030 },
+	{ _MMIO(0x2774), 0x0000fff9 },
+	{ _MMIO(0x2778), 0x00000002 },
+	{ _MMIO(0x277c), 0x0000fffc },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000fff3 },
+	{ _MMIO(0x2788), 0x00100180 },
+	{ _MMIO(0x278c), 0x0000ffcf },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00000002 },
+	{ _MMIO(0x279c), 0x0000ff3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_vme_pipe[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00008003 },
+};
+
+static const struct i915_oa_reg mux_config_vme_pipe[] = {
+	{ _MMIO(0x9888), 0x141a5800 },
+	{ _MMIO(0x9888), 0x161a00c0 },
+	{ _MMIO(0x9888), 0x12180240 },
+	{ _MMIO(0x9888), 0x14180002 },
+	{ _MMIO(0x9888), 0x149a5800 },
+	{ _MMIO(0x9888), 0x169a00c0 },
+	{ _MMIO(0x9888), 0x12980240 },
+	{ _MMIO(0x9888), 0x14980002 },
+	{ _MMIO(0x9888), 0x1a4e3fc0 },
+	{ _MMIO(0x9888), 0x002f1000 },
+	{ _MMIO(0x9888), 0x022f8000 },
+	{ _MMIO(0x9888), 0x042f3000 },
+	{ _MMIO(0x9888), 0x004c4000 },
+	{ _MMIO(0x9888), 0x0a4c9500 },
+	{ _MMIO(0x9888), 0x0c4c002a },
+	{ _MMIO(0x9888), 0x000d2000 },
+	{ _MMIO(0x9888), 0x060d8000 },
+	{ _MMIO(0x9888), 0x080da000 },
+	{ _MMIO(0x9888), 0x0a0da000 },
+	{ _MMIO(0x9888), 0x0c0da000 },
+	{ _MMIO(0x9888), 0x0c0f0400 },
+	{ _MMIO(0x9888), 0x0e0f5500 },
+	{ _MMIO(0x9888), 0x100f0015 },
+	{ _MMIO(0x9888), 0x002c8000 },
+	{ _MMIO(0x9888), 0x0e2c8000 },
+	{ _MMIO(0x9888), 0x162caa00 },
+	{ _MMIO(0x9888), 0x182c000a },
+	{ _MMIO(0x9888), 0x04193000 },
+	{ _MMIO(0x9888), 0x081a28c1 },
+	{ _MMIO(0x9888), 0x001a0000 },
+	{ _MMIO(0x9888), 0x00133000 },
+	{ _MMIO(0x9888), 0x0613c000 },
+	{ _MMIO(0x9888), 0x0813f000 },
+	{ _MMIO(0x9888), 0x00172000 },
+	{ _MMIO(0x9888), 0x06178000 },
+	{ _MMIO(0x9888), 0x0817a000 },
+	{ _MMIO(0x9888), 0x00180037 },
+	{ _MMIO(0x9888), 0x06180940 },
+	{ _MMIO(0x9888), 0x08180000 },
+	{ _MMIO(0x9888), 0x02180000 },
+	{ _MMIO(0x9888), 0x04183000 },
+	{ _MMIO(0x9888), 0x04afc000 },
+	{ _MMIO(0x9888), 0x06af3000 },
+	{ _MMIO(0x9888), 0x0acc4000 },
+	{ _MMIO(0x9888), 0x0ccc0015 },
+	{ _MMIO(0x9888), 0x0a8da000 },
+	{ _MMIO(0x9888), 0x0c8da000 },
+	{ _MMIO(0x9888), 0x0e8f4000 },
+	{ _MMIO(0x9888), 0x108f0015 },
+	{ _MMIO(0x9888), 0x16aca000 },
+	{ _MMIO(0x9888), 0x18ac000a },
+	{ _MMIO(0x9888), 0x06993000 },
+	{ _MMIO(0x9888), 0x0c9a28c1 },
+	{ _MMIO(0x9888), 0x009a0000 },
+	{ _MMIO(0x9888), 0x0a93f000 },
+	{ _MMIO(0x9888), 0x0c93f000 },
+	{ _MMIO(0x9888), 0x0a97a000 },
+	{ _MMIO(0x9888), 0x0c97a000 },
+	{ _MMIO(0x9888), 0x0a980977 },
+	{ _MMIO(0x9888), 0x08980000 },
+	{ _MMIO(0x9888), 0x04980000 },
+	{ _MMIO(0x9888), 0x06983000 },
+	{ _MMIO(0x9888), 0x119000ff },
+	{ _MMIO(0x9888), 0x51900040 },
+	{ _MMIO(0x9888), 0x41900020 },
+	{ _MMIO(0x9888), 0x55900004 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x479008a5 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900002 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_vme_pipe_mux_config(struct drm_i915_private *dev_priv,
+			const struct i915_oa_reg **regs,
+			int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_vme_pipe;
+	lens[n] = ARRAY_SIZE(mux_config_vme_pipe);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
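+/* The TEST_OA set likewise needs no flex-EU programming. */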
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x11810000 },
+	{ _MMIO(0x9888), 0x07810013 },
+	{ _MMIO(0x9888), 0x1f810000 },
+	{ _MMIO(0x9888), 0x1d810000 },
+	{ _MMIO(0x9888), 0x1b930040 },
+	{ _MMIO(0x9888), 0x07e54000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x11900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
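+/* Look up the register lists for the metric set currently selected in
+ * dev_priv->perf.oa.metrics_set and stash them in dev_priv->perf.oa for the
+ * perf stream code to program when the OA unit is configured. Returns
+ * -EINVAL for a set that exists but has no usable MUX config on this device,
+ * and -ENODEV for an unknown set ID.
+ */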
+int i915_oa_select_metric_set_kblgt3(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_L3_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_2_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_2);
+
+		return 0;
+	case METRIC_SET_ID_L3_3:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_3_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_3\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_3;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_3);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_3;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_3);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_VME_PIPE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_vme_pipe_mux_config(dev_priv,
+						dev_priv->perf.oa.mux_regs,
+						dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"VME_PIPE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_vme_pipe;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_vme_pipe);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_vme_pipe;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_vme_pipe);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this,
+			 * so it wouldn't have been advertised to userspace and
+			 * shouldn't have been requested.
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
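+/* The remainder of this file exposes the metric sets through sysfs: each set
+ * gets an attribute group named after its UUID with a single read-only "id"
+ * file. Userspace is expected to look the id up (e.g. under
+ * /sys/class/drm/card<N>/metrics/<uuid>/id) and pass it as the OA metrics
+ * set property (DRM_I915_PERF_PROP_OA_METRICS_SET) when opening a perf
+ * stream.
+ */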
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "0286c920-2f6d-493b-b22d-7a5280df43de",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "9823aaa1-b06f-40ce-884b-cd798c79f0c2",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "c7c735f3-ce58-45cf-aa04-30b183f1faff",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "96ec2219-040b-428a-856a-6bc03363a057",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "03372b64-4996-4d3b-aa18-790e75eeb9c2",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "31b4ce5a-bd61-4c1f-bb5d-f2e731412150",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "2ce0911a-27fc-4887-96f0-11084fa807c3",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "546c4c1d-99b8-42fb-a107-5aaabb5314a8",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "4e93d156-9b39-4268-8544-a8e0480806d7",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_l3_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_2);
+}
+
+static struct device_attribute dev_attr_l3_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_2[] = {
+	&dev_attr_l3_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_2 = {
+	.name = "de1bec86-ca92-4b43-89fa-147653221cc0",
+	.attrs =  attrs_l3_2,
+};
+
+static ssize_t
+show_l3_3_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_3);
+}
+
+static struct device_attribute dev_attr_l3_3_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_3_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_3[] = {
+	&dev_attr_l3_3_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_3 = {
+	.name = "e63537bb-10be-4d4a-92c4-c6b0c65e02ef",
+	.attrs =  attrs_l3_3,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "7a03a9f8-ec5e-46bb-8b67-1f0ff1476281",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "b25d2ebf-a6e0-4b29-96be-a9b010edeeda",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "469a05e5-e299-46f7-9598-7b05f3c34991",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "52f925c6-786a-4ec6-86ce-cba85c83453a",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "efc497ac-884e-4ee4-a4a8-15fba22aaf21",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_vme_pipe_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_VME_PIPE);
+}
+
+static struct device_attribute dev_attr_vme_pipe_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_vme_pipe_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_vme_pipe[] = {
+	&dev_attr_vme_pipe_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_vme_pipe = {
+	.name = "bfd9764d-2c5b-4c16-bfc1-89de3ca10917",
+	.attrs =  attrs_vme_pipe,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "f1792f32-6db2-4b50-b4b2-557128f1688d",
+	.attrs =  attrs_test_oa,
+};
+
+int
+i915_perf_register_sysfs_kblgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+		if (ret)
+			goto error_l3_2;
+	}
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+		if (ret)
+			goto error_l3_3;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+		if (ret)
+			goto error_vme_pipe;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
+
+	return 0;
+
+error_test_oa:
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+error_vme_pipe:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+error_l3_3:
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+error_l3_2:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_kblgt3(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_l3_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_2);
+	if (get_l3_3_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_3);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_vme_pipe_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_vme_pipe);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_kblgt3.h b/drivers/gpu/drm/i915/i915_oa_kblgt3.h
new file mode 100644
index 000000000000..b0ca7f3114d3
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_kblgt3.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_KBLGT3_H__
+#define __I915_OA_KBLGT3_H__
+
+extern int i915_oa_n_builtin_metric_sets_kblgt3;
+
+extern int i915_oa_select_metric_set_kblgt3(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_kblgt3(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_kblgt3(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 9e5307bbaab1..d5126faf2a35 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -202,6 +202,8 @@
 #include "i915_oa_sklgt3.h"
 #include "i915_oa_sklgt4.h"
 #include "i915_oa_bxt.h"
+#include "i915_oa_kblgt2.h"
+#include "i915_oa_kblgt3.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -1778,7 +1780,8 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
 	 * be read back from automatically triggered reports, as part of the
 	 * RPT_ID field.
 	 */
-	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv) ||
+	    IS_KABYLAKE(dev_priv)) {
 		I915_WRITE(GEN8_OA_DEBUG,
 			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
 					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
@@ -2857,6 +2860,15 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 	} else if (IS_BROXTON(dev_priv)) {
 		if (i915_perf_register_sysfs_bxt(dev_priv))
 			goto sysfs_error;
+	} else if (IS_KABYLAKE(dev_priv)) {
+		if (IS_KBL_GT2(dev_priv)) {
+			if (i915_perf_register_sysfs_kblgt2(dev_priv))
+				goto sysfs_error;
+		} else if (IS_KBL_GT3(dev_priv)) {
+			if (i915_perf_register_sysfs_kblgt3(dev_priv))
+				goto sysfs_error;
+		} else
+			goto sysfs_error;
 	}
 
 	goto exit;
@@ -2898,6 +2910,12 @@ void i915_perf_unregister(struct drm_i915_private *dev_priv)
 			i915_perf_unregister_sysfs_sklgt4(dev_priv);
 	} else if (IS_BROXTON(dev_priv))
 		i915_perf_unregister_sysfs_bxt(dev_priv);
+	else if (IS_KABYLAKE(dev_priv)) {
+		if (IS_KBL_GT2(dev_priv))
+			i915_perf_unregister_sysfs_kblgt2(dev_priv);
+		else if (IS_KBL_GT3(dev_priv))
+			i915_perf_unregister_sysfs_kblgt3(dev_priv);
+	}
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -3031,6 +3049,16 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 					i915_oa_n_builtin_metric_sets_bxt;
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_bxt;
+			} else if (IS_KBL_GT2(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_kblgt2;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_kblgt2;
+			} else if (IS_KBL_GT3(dev_priv)) {
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_kblgt3;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_kblgt3;
 			}
 		}
 
-- 
2.11.0


* [PATCH v15 12/14] drm/i915/perf: add GLK support
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (10 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 11/14] drm/i915/perf: add KBL support Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

Add OA support for Geminilake (pretty much identical to Broxton), and
also add the associated OA configurations.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/Makefile      |    3 +-
 drivers/gpu/drm/i915/i915_oa_glk.c | 2602 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_oa_glk.h |   40 +
 drivers/gpu/drm/i915/i915_perf.c   |   17 +-
 4 files changed, 2659 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.c
 create mode 100644 drivers/gpu/drm/i915/i915_oa_glk.h
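
For context (not part of this patch): the configs added below are not consumed by userspace directly. The i915_perf_register_sysfs_*() helpers (see the KBL GT3 version earlier in the series) publish each config as a sysfs group named by its UUID with an "id" attribute, and that numeric ID is what userspace passes to DRM_IOCTL_I915_PERF_OPEN via DRM_I915_PERF_PROP_OA_METRICS_SET. A minimal sketch of that flow follows; the sysfs/DRM node paths, the UUID (borrowed from the KBL GT3 TestOa config shown above), and the OA format/exponent choices are illustrative assumptions, and opening a stream typically requires root or dev.i915.perf_stream_paranoid=0.

/*
 * Minimal userspace sketch (illustrative only, not part of this patch):
 * read a metric set ID from the sysfs group registered by the
 * i915_perf_register_sysfs_*() helpers and open an OA stream with it.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/i915_drm.h>

int main(void)
{
	char buf[32] = { 0 };
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, 0,	/* filled in below */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,	/* arbitrary sample period */
	};
	struct drm_i915_perf_open_param param = {
		.flags = I915_PERF_FLAG_FD_CLOEXEC | I915_PERF_FLAG_DISABLED,
		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
		.properties_ptr = (uintptr_t)properties,
	};
	int sysfs, drm, stream;

	/* Each config is exposed as <metrics_kobj>/<uuid>/id in sysfs. */
	sysfs = open("/sys/class/drm/card0/metrics/"
		     "f1792f32-6db2-4b50-b4b2-557128f1688d/id", O_RDONLY);
	if (sysfs < 0 || read(sysfs, buf, sizeof(buf) - 1) <= 0)
		return 1;
	properties[3] = strtoull(buf, NULL, 10);
	close(sysfs);

	drm = open("/dev/dri/card0", O_RDWR);
	if (drm < 0)
		return 1;

	/* On success this returns a new fd for the OA stream. */
	stream = ioctl(drm, DRM_IOCTL_I915_PERF_OPEN, &param);
	if (stream < 0)
		return 1;

	/* The stream was opened disabled; enable it before read()ing reports. */
	ioctl(stream, I915_PERF_IOCTL_ENABLE, 0);

	return 0;
}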

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 033a2df01dbe..f8227318dcaf 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -137,7 +137,8 @@ i915-y += i915_perf.o \
 	  i915_oa_sklgt4.o \
 	  i915_oa_bxt.o \
 	  i915_oa_kblgt2.o \
-	  i915_oa_kblgt3.o
+	  i915_oa_kblgt3.o \
+	  i915_oa_glk.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/i915_oa_glk.c b/drivers/gpu/drm/i915/i915_oa_glk.c
new file mode 100644
index 000000000000..2f356d51bff8
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_glk.c
@@ -0,0 +1,2602 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/sysfs.h>
+
+#include "i915_drv.h"
+#include "i915_oa_glk.h"
+
+enum metric_set_id {
+	METRIC_SET_ID_RENDER_BASIC = 1,
+	METRIC_SET_ID_COMPUTE_BASIC,
+	METRIC_SET_ID_RENDER_PIPE_PROFILE,
+	METRIC_SET_ID_MEMORY_READS,
+	METRIC_SET_ID_MEMORY_WRITES,
+	METRIC_SET_ID_COMPUTE_EXTENDED,
+	METRIC_SET_ID_COMPUTE_L3_CACHE,
+	METRIC_SET_ID_HDC_AND_SF,
+	METRIC_SET_ID_L3_1,
+	METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND,
+	METRIC_SET_ID_SAMPLER,
+	METRIC_SET_ID_TDL_1,
+	METRIC_SET_ID_TDL_2,
+	METRIC_SET_ID_COMPUTE_EXTRA,
+	METRIC_SET_ID_TEST_OA,
+};
+
+int i915_oa_n_builtin_metric_sets_glk = 15;
+
+static const struct i915_oa_reg b_counter_config_render_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_render_basic[] = {
+	{ _MMIO(0x9888), 0x166c00f0 },
+	{ _MMIO(0x9888), 0x12120280 },
+	{ _MMIO(0x9888), 0x12320280 },
+	{ _MMIO(0x9888), 0x11930317 },
+	{ _MMIO(0x9888), 0x159303df },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x419000a0 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2e0800 },
+	{ _MMIO(0x9888), 0x0e2e5900 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x1c4f0010 },
+	{ _MMIO(0x9888), 0x0a6c0053 },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a0fcc00 },
+	{ _MMIO(0x9888), 0x1c0f0002 },
+	{ _MMIO(0x9888), 0x1c2c0040 },
+	{ _MMIO(0x9888), 0x00101000 },
+	{ _MMIO(0x9888), 0x04101000 },
+	{ _MMIO(0x9888), 0x00114000 },
+	{ _MMIO(0x9888), 0x08114000 },
+	{ _MMIO(0x9888), 0x00120020 },
+	{ _MMIO(0x9888), 0x08120021 },
+	{ _MMIO(0x9888), 0x00141000 },
+	{ _MMIO(0x9888), 0x08141000 },
+	{ _MMIO(0x9888), 0x02308000 },
+	{ _MMIO(0x9888), 0x04302000 },
+	{ _MMIO(0x9888), 0x06318000 },
+	{ _MMIO(0x9888), 0x08318000 },
+	{ _MMIO(0x9888), 0x06320800 },
+	{ _MMIO(0x9888), 0x08320840 },
+	{ _MMIO(0x9888), 0x00320000 },
+	{ _MMIO(0x9888), 0x06344000 },
+	{ _MMIO(0x9888), 0x08344000 },
+	{ _MMIO(0x9888), 0x0d931831 },
+	{ _MMIO(0x9888), 0x0f939f3f },
+	{ _MMIO(0x9888), 0x01939e80 },
+	{ _MMIO(0x9888), 0x039303bc },
+	{ _MMIO(0x9888), 0x0593000e },
+	{ _MMIO(0x9888), 0x1993002a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900187 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901110 },
+	{ _MMIO(0x9888), 0x43900423 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900c02 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900020 },
+	{ _MMIO(0x9888), 0x59901111 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900821 },
+};
+
+static int
+get_render_basic_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_basic;
+	lens[n] = ARRAY_SIZE(mux_config_render_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_basic[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_basic[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_basic[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x124f1c00 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d4000 },
+	{ _MMIO(0x9888), 0x0a2d1000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d4000 },
+	{ _MMIO(0x9888), 0x0c2e1400 },
+	{ _MMIO(0x9888), 0x0e2e5100 },
+	{ _MMIO(0x9888), 0x102e0114 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4c8000 },
+	{ _MMIO(0x9888), 0x0e4c4000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084e8000 },
+	{ _MMIO(0x9888), 0x0a4e2000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004f6b42 },
+	{ _MMIO(0x9888), 0x064f6200 },
+	{ _MMIO(0x9888), 0x084f4100 },
+	{ _MMIO(0x9888), 0x0a4f0061 },
+	{ _MMIO(0x9888), 0x0c4f6c4c },
+	{ _MMIO(0x9888), 0x0e4f4b00 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0f8800 },
+	{ _MMIO(0x9888), 0x1c0f08a2 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c1451 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c0010 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x19938a28 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x19900177 },
+	{ _MMIO(0x9888), 0x1b900178 },
+	{ _MMIO(0x9888), 0x1d900125 },
+	{ _MMIO(0x9888), 0x1f900123 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_compute_basic_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_basic;
+	lens[n] = ARRAY_SIZE(mux_config_compute_basic);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_render_pipe_profile[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007ffea },
+	{ _MMIO(0x2774), 0x00007ffc },
+	{ _MMIO(0x2778), 0x0007affa },
+	{ _MMIO(0x277c), 0x0000f5fd },
+	{ _MMIO(0x2780), 0x00079ffa },
+	{ _MMIO(0x2784), 0x0000f3fb },
+	{ _MMIO(0x2788), 0x0007bf7a },
+	{ _MMIO(0x278c), 0x0000f7e7 },
+	{ _MMIO(0x2790), 0x0007fefa },
+	{ _MMIO(0x2794), 0x0000f7cf },
+	{ _MMIO(0x2798), 0x00077ffa },
+	{ _MMIO(0x279c), 0x0000efdf },
+	{ _MMIO(0x27a0), 0x0006fffa },
+	{ _MMIO(0x27a4), 0x0000cfbf },
+	{ _MMIO(0x27a8), 0x0003fffa },
+	{ _MMIO(0x27ac), 0x00005f7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_render_pipe_profile[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_render_pipe_profile[] = {
+	{ _MMIO(0x9888), 0x0c2e001f },
+	{ _MMIO(0x9888), 0x0a2f0000 },
+	{ _MMIO(0x9888), 0x10186800 },
+	{ _MMIO(0x9888), 0x11810019 },
+	{ _MMIO(0x9888), 0x15810013 },
+	{ _MMIO(0x9888), 0x13820020 },
+	{ _MMIO(0x9888), 0x11830020 },
+	{ _MMIO(0x9888), 0x17840000 },
+	{ _MMIO(0x9888), 0x11860007 },
+	{ _MMIO(0x9888), 0x21860000 },
+	{ _MMIO(0x9888), 0x178703e0 },
+	{ _MMIO(0x9888), 0x0c2d8000 },
+	{ _MMIO(0x9888), 0x042d4000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x022e5400 },
+	{ _MMIO(0x9888), 0x002e0000 },
+	{ _MMIO(0x9888), 0x0e2e0080 },
+	{ _MMIO(0x9888), 0x082f0040 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x06143000 },
+	{ _MMIO(0x9888), 0x06174000 },
+	{ _MMIO(0x9888), 0x06180012 },
+	{ _MMIO(0x9888), 0x00180000 },
+	{ _MMIO(0x9888), 0x0d804000 },
+	{ _MMIO(0x9888), 0x0f804000 },
+	{ _MMIO(0x9888), 0x05804000 },
+	{ _MMIO(0x9888), 0x09810200 },
+	{ _MMIO(0x9888), 0x0b810030 },
+	{ _MMIO(0x9888), 0x03810003 },
+	{ _MMIO(0x9888), 0x21819140 },
+	{ _MMIO(0x9888), 0x23819050 },
+	{ _MMIO(0x9888), 0x25810018 },
+	{ _MMIO(0x9888), 0x0b820980 },
+	{ _MMIO(0x9888), 0x03820d80 },
+	{ _MMIO(0x9888), 0x11820000 },
+	{ _MMIO(0x9888), 0x0182c000 },
+	{ _MMIO(0x9888), 0x07828000 },
+	{ _MMIO(0x9888), 0x09824000 },
+	{ _MMIO(0x9888), 0x0f828000 },
+	{ _MMIO(0x9888), 0x0d830004 },
+	{ _MMIO(0x9888), 0x0583000c },
+	{ _MMIO(0x9888), 0x0f831000 },
+	{ _MMIO(0x9888), 0x01848072 },
+	{ _MMIO(0x9888), 0x11840000 },
+	{ _MMIO(0x9888), 0x07848000 },
+	{ _MMIO(0x9888), 0x09844000 },
+	{ _MMIO(0x9888), 0x0f848000 },
+	{ _MMIO(0x9888), 0x07860000 },
+	{ _MMIO(0x9888), 0x09860092 },
+	{ _MMIO(0x9888), 0x0f860400 },
+	{ _MMIO(0x9888), 0x01869100 },
+	{ _MMIO(0x9888), 0x0f870065 },
+	{ _MMIO(0x9888), 0x01870000 },
+	{ _MMIO(0x9888), 0x19930800 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b952000 },
+	{ _MMIO(0x9888), 0x1d955055 },
+	{ _MMIO(0x9888), 0x1f951455 },
+	{ _MMIO(0x9888), 0x0992a000 },
+	{ _MMIO(0x9888), 0x0f928000 },
+	{ _MMIO(0x9888), 0x1192a800 },
+	{ _MMIO(0x9888), 0x1392028a },
+	{ _MMIO(0x9888), 0x0b92a000 },
+	{ _MMIO(0x9888), 0x0d922000 },
+	{ _MMIO(0x9888), 0x13908000 },
+	{ _MMIO(0x9888), 0x21908000 },
+	{ _MMIO(0x9888), 0x23908000 },
+	{ _MMIO(0x9888), 0x25908000 },
+	{ _MMIO(0x9888), 0x27908000 },
+	{ _MMIO(0x9888), 0x29908000 },
+	{ _MMIO(0x9888), 0x2b908000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f908000 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x15908000 },
+	{ _MMIO(0x9888), 0x17908000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900c01 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900863 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900061 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900c22 },
+};
+
+static int
+get_render_pipe_profile_mux_config(struct drm_i915_private *dev_priv,
+				   const struct i915_oa_reg **regs,
+				   int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_render_pipe_profile;
+	lens[n] = ARRAY_SIZE(mux_config_render_pipe_profile);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_reads[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f872 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_reads[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_reads[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f901000 },
+	{ _MMIO(0x9888), 0x41900003 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900170 },
+	{ _MMIO(0x9888), 0x21900171 },
+	{ _MMIO(0x9888), 0x23900172 },
+	{ _MMIO(0x9888), 0x25900173 },
+	{ _MMIO(0x9888), 0x27900174 },
+	{ _MMIO(0x9888), 0x29900175 },
+	{ _MMIO(0x9888), 0x2b900176 },
+	{ _MMIO(0x9888), 0x2d900177 },
+	{ _MMIO(0x9888), 0x2f90017f },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_reads_mux_config(struct drm_i915_private *dev_priv,
+			    const struct i915_oa_reg **regs,
+			    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_reads;
+	lens[n] = ARRAY_SIZE(mux_config_memory_reads);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_memory_writes[] = {
+	{ _MMIO(0x272c), 0xffffffff },
+	{ _MMIO(0x2728), 0xffffffff },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x271c), 0xffffffff },
+	{ _MMIO(0x2718), 0xffffffff },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x274c), 0x86543210 },
+	{ _MMIO(0x2748), 0x86543210 },
+	{ _MMIO(0x2744), 0x00006667 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x275c), 0x86543210 },
+	{ _MMIO(0x2758), 0x86543210 },
+	{ _MMIO(0x2754), 0x00006465 },
+	{ _MMIO(0x2750), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007f81a },
+	{ _MMIO(0x2774), 0x0000fe00 },
+	{ _MMIO(0x2778), 0x0007f82a },
+	{ _MMIO(0x277c), 0x0000fe00 },
+	{ _MMIO(0x2780), 0x0007f822 },
+	{ _MMIO(0x2784), 0x0000fe00 },
+	{ _MMIO(0x2788), 0x0007f8ba },
+	{ _MMIO(0x278c), 0x0000fe00 },
+	{ _MMIO(0x2790), 0x0007f87a },
+	{ _MMIO(0x2794), 0x0000fe00 },
+	{ _MMIO(0x2798), 0x0007f8ea },
+	{ _MMIO(0x279c), 0x0000fe00 },
+	{ _MMIO(0x27a0), 0x0007f8e2 },
+	{ _MMIO(0x27a4), 0x0000fe00 },
+	{ _MMIO(0x27a8), 0x0007f8f2 },
+	{ _MMIO(0x27ac), 0x0000fe00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_memory_writes[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00015014 },
+	{ _MMIO(0xe658), 0x00025024 },
+	{ _MMIO(0xe758), 0x00035034 },
+	{ _MMIO(0xe45c), 0x00045044 },
+	{ _MMIO(0xe55c), 0x00055054 },
+	{ _MMIO(0xe65c), 0x00065064 },
+};
+
+static const struct i915_oa_reg mux_config_memory_writes[] = {
+	{ _MMIO(0x9888), 0x19800343 },
+	{ _MMIO(0x9888), 0x39900340 },
+	{ _MMIO(0x9888), 0x3f900000 },
+	{ _MMIO(0x9888), 0x41900080 },
+	{ _MMIO(0x9888), 0x03803180 },
+	{ _MMIO(0x9888), 0x058035e2 },
+	{ _MMIO(0x9888), 0x0780006a },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x2181a000 },
+	{ _MMIO(0x9888), 0x2381000a },
+	{ _MMIO(0x9888), 0x1d950550 },
+	{ _MMIO(0x9888), 0x0b928000 },
+	{ _MMIO(0x9888), 0x0d92a000 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x13900180 },
+	{ _MMIO(0x9888), 0x21900181 },
+	{ _MMIO(0x9888), 0x23900182 },
+	{ _MMIO(0x9888), 0x25900183 },
+	{ _MMIO(0x9888), 0x27900184 },
+	{ _MMIO(0x9888), 0x29900185 },
+	{ _MMIO(0x9888), 0x2b900186 },
+	{ _MMIO(0x9888), 0x2d900187 },
+	{ _MMIO(0x9888), 0x2f900170 },
+	{ _MMIO(0x9888), 0x31900125 },
+	{ _MMIO(0x9888), 0x15900123 },
+	{ _MMIO(0x9888), 0x17900121 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x19908000 },
+	{ _MMIO(0x9888), 0x1b908000 },
+	{ _MMIO(0x9888), 0x1d908000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47901080 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49901084 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b901084 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900004 },
+	{ _MMIO(0x9888), 0x45900000 },
+};
+
+static int
+get_memory_writes_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_memory_writes;
+	lens[n] = ARRAY_SIZE(mux_config_memory_writes);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extended[] = {
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fc2a },
+	{ _MMIO(0x2774), 0x0000bf00 },
+	{ _MMIO(0x2778), 0x0007fc6a },
+	{ _MMIO(0x277c), 0x0000bf00 },
+	{ _MMIO(0x2780), 0x0007fc92 },
+	{ _MMIO(0x2784), 0x0000bf00 },
+	{ _MMIO(0x2788), 0x0007fca2 },
+	{ _MMIO(0x278c), 0x0000bf00 },
+	{ _MMIO(0x2790), 0x0007fc32 },
+	{ _MMIO(0x2794), 0x0000bf00 },
+	{ _MMIO(0x2798), 0x0007fc9a },
+	{ _MMIO(0x279c), 0x0000bf00 },
+	{ _MMIO(0x27a0), 0x0007fe6a },
+	{ _MMIO(0x27a4), 0x0000bf00 },
+	{ _MMIO(0x27a8), 0x0007fe7a },
+	{ _MMIO(0x27ac), 0x0000bf00 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extended[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00778008 },
+	{ _MMIO(0xe45c), 0x00088078 },
+	{ _MMIO(0xe55c), 0x00808708 },
+	{ _MMIO(0xe65c), 0x00a08908 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extended[] = {
+	{ _MMIO(0x9888), 0x104f00e0 },
+	{ _MMIO(0x9888), 0x141c0160 },
+	{ _MMIO(0x9888), 0x161c0015 },
+	{ _MMIO(0x9888), 0x181c0120 },
+	{ _MMIO(0x9888), 0x002d5000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0a2d5000 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x0c2e5400 },
+	{ _MMIO(0x9888), 0x0e2e5515 },
+	{ _MMIO(0x9888), 0x102e0155 },
+	{ _MMIO(0x9888), 0x044cc000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x0e4cc000 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x004ea000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0a4ea000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x0e4f4b41 },
+	{ _MMIO(0x9888), 0x004f4200 },
+	{ _MMIO(0x9888), 0x024f404c },
+	{ _MMIO(0x9888), 0x1c4f0000 },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x001b4000 },
+	{ _MMIO(0x9888), 0x061b8000 },
+	{ _MMIO(0x9888), 0x081bc000 },
+	{ _MMIO(0x9888), 0x0a1bc000 },
+	{ _MMIO(0x9888), 0x0c1bc000 },
+	{ _MMIO(0x9888), 0x041bc000 },
+	{ _MMIO(0x9888), 0x001c0031 },
+	{ _MMIO(0x9888), 0x061c1900 },
+	{ _MMIO(0x9888), 0x081c1a33 },
+	{ _MMIO(0x9888), 0x0a1c1b35 },
+	{ _MMIO(0x9888), 0x0c1c3337 },
+	{ _MMIO(0x9888), 0x041c31c7 },
+	{ _MMIO(0x9888), 0x180f5000 },
+	{ _MMIO(0x9888), 0x1a0fa8aa },
+	{ _MMIO(0x9888), 0x1c0f0aaa },
+	{ _MMIO(0x9888), 0x182c8000 },
+	{ _MMIO(0x9888), 0x1c2c6aaa },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c2950 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993aaaa },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x27904000 },
+	{ _MMIO(0x9888), 0x29904000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900400 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x45900001 },
+};
+
+static int
+get_compute_extended_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extended;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extended);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_l3_cache[] = {
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2770), 0x0007fffa },
+	{ _MMIO(0x2774), 0x0000fefe },
+	{ _MMIO(0x2778), 0x0007fffa },
+	{ _MMIO(0x277c), 0x0000fefd },
+	{ _MMIO(0x2790), 0x0007fffa },
+	{ _MMIO(0x2794), 0x0000fbef },
+	{ _MMIO(0x2798), 0x0007fffa },
+	{ _MMIO(0x279c), 0x0000fbdf },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_l3_cache[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00000003 },
+	{ _MMIO(0xe658), 0x00002001 },
+	{ _MMIO(0xe758), 0x00101100 },
+	{ _MMIO(0xe45c), 0x00201200 },
+	{ _MMIO(0xe55c), 0x00301300 },
+	{ _MMIO(0xe65c), 0x00401400 },
+};
+
+static const struct i915_oa_reg mux_config_compute_l3_cache[] = {
+	{ _MMIO(0x9888), 0x166c03b0 },
+	{ _MMIO(0x9888), 0x1593001e },
+	{ _MMIO(0x9888), 0x3f900c00 },
+	{ _MMIO(0x9888), 0x41900000 },
+	{ _MMIO(0x9888), 0x002d1000 },
+	{ _MMIO(0x9888), 0x062d4000 },
+	{ _MMIO(0x9888), 0x082d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x0c2e0400 },
+	{ _MMIO(0x9888), 0x0e2e1500 },
+	{ _MMIO(0x9888), 0x102e0140 },
+	{ _MMIO(0x9888), 0x044c4000 },
+	{ _MMIO(0x9888), 0x0a4c8000 },
+	{ _MMIO(0x9888), 0x0c4cc000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x004e2000 },
+	{ _MMIO(0x9888), 0x064e8000 },
+	{ _MMIO(0x9888), 0x084ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x1a4f4001 },
+	{ _MMIO(0x9888), 0x1c4f5005 },
+	{ _MMIO(0x9888), 0x006c0051 },
+	{ _MMIO(0x9888), 0x066c5000 },
+	{ _MMIO(0x9888), 0x086c5c5d },
+	{ _MMIO(0x9888), 0x0e6c5e5f },
+	{ _MMIO(0x9888), 0x106c0000 },
+	{ _MMIO(0x9888), 0x146c0000 },
+	{ _MMIO(0x9888), 0x1a6c0000 },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x180f1000 },
+	{ _MMIO(0x9888), 0x1a0fa800 },
+	{ _MMIO(0x9888), 0x1c0f0a00 },
+	{ _MMIO(0x9888), 0x182c4000 },
+	{ _MMIO(0x9888), 0x1c2c4015 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x03931980 },
+	{ _MMIO(0x9888), 0x05930032 },
+	{ _MMIO(0x9888), 0x11930000 },
+	{ _MMIO(0x9888), 0x01938000 },
+	{ _MMIO(0x9888), 0x0f938000 },
+	{ _MMIO(0x9888), 0x1993a00a },
+	{ _MMIO(0x9888), 0x07930000 },
+	{ _MMIO(0x9888), 0x09930000 },
+	{ _MMIO(0x9888), 0x1d900177 },
+	{ _MMIO(0x9888), 0x1f900178 },
+	{ _MMIO(0x9888), 0x35900000 },
+	{ _MMIO(0x9888), 0x13904000 },
+	{ _MMIO(0x9888), 0x21904000 },
+	{ _MMIO(0x9888), 0x23904000 },
+	{ _MMIO(0x9888), 0x25904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x53901000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x55900111 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x57900000 },
+	{ _MMIO(0x9888), 0x49900000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+};
+
+static int
+get_compute_l3_cache_mux_config(struct drm_i915_private *dev_priv,
+				const struct i915_oa_reg **regs,
+				int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_l3_cache;
+	lens[n] = ARRAY_SIZE(mux_config_compute_l3_cache);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_hdc_and_sf[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x10800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000fdff },
+};
+
+static const struct i915_oa_reg flex_eu_config_hdc_and_sf[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_hdc_and_sf[] = {
+	{ _MMIO(0x9888), 0x104f0232 },
+	{ _MMIO(0x9888), 0x124f4640 },
+	{ _MMIO(0x9888), 0x11834400 },
+	{ _MMIO(0x9888), 0x022d4000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x064c8000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x024e8000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x024f6100 },
+	{ _MMIO(0x9888), 0x044f416b },
+	{ _MMIO(0x9888), 0x064f004b },
+	{ _MMIO(0x9888), 0x1a4f0000 },
+	{ _MMIO(0x9888), 0x1a0f02a8 },
+	{ _MMIO(0x9888), 0x1a2c5500 },
+	{ _MMIO(0x9888), 0x0f808000 },
+	{ _MMIO(0x9888), 0x25810020 },
+	{ _MMIO(0x9888), 0x0f8305c0 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1f951000 },
+	{ _MMIO(0x9888), 0x13920200 },
+	{ _MMIO(0x9888), 0x31908000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4d900003 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x45900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_hdc_and_sf_mux_config(struct drm_i915_private *dev_priv,
+			  const struct i915_oa_reg **regs,
+			  int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_hdc_and_sf;
+	lens[n] = ARRAY_SIZE(mux_config_hdc_and_sf);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_l3_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2770), 0x00100070 },
+	{ _MMIO(0x2774), 0x0000fff1 },
+	{ _MMIO(0x2778), 0x00014002 },
+	{ _MMIO(0x277c), 0x0000c3ff },
+	{ _MMIO(0x2780), 0x00010002 },
+	{ _MMIO(0x2784), 0x0000c7ff },
+	{ _MMIO(0x2788), 0x00004002 },
+	{ _MMIO(0x278c), 0x0000d3ff },
+	{ _MMIO(0x2790), 0x00100700 },
+	{ _MMIO(0x2794), 0x0000ff1f },
+	{ _MMIO(0x2798), 0x00001402 },
+	{ _MMIO(0x279c), 0x0000fc3f },
+	{ _MMIO(0x27a0), 0x00001002 },
+	{ _MMIO(0x27a4), 0x0000fc7f },
+	{ _MMIO(0x27a8), 0x00000402 },
+	{ _MMIO(0x27ac), 0x0000fd3f },
+};
+
+static const struct i915_oa_reg flex_eu_config_l3_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_l3_1[] = {
+	{ _MMIO(0x9888), 0x12643400 },
+	{ _MMIO(0x9888), 0x12653400 },
+	{ _MMIO(0x9888), 0x106c6800 },
+	{ _MMIO(0x9888), 0x126c001e },
+	{ _MMIO(0x9888), 0x166c0010 },
+	{ _MMIO(0x9888), 0x0c2d5000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0154 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0055 },
+	{ _MMIO(0x9888), 0x104c8000 },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4ea000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c4f5500 },
+	{ _MMIO(0x9888), 0x1a4f1554 },
+	{ _MMIO(0x9888), 0x0a640024 },
+	{ _MMIO(0x9888), 0x10640000 },
+	{ _MMIO(0x9888), 0x04640000 },
+	{ _MMIO(0x9888), 0x0c650024 },
+	{ _MMIO(0x9888), 0x10650000 },
+	{ _MMIO(0x9888), 0x06650000 },
+	{ _MMIO(0x9888), 0x0c6c5327 },
+	{ _MMIO(0x9888), 0x0e6c5425 },
+	{ _MMIO(0x9888), 0x006c2a00 },
+	{ _MMIO(0x9888), 0x026c285b },
+	{ _MMIO(0x9888), 0x046c005c },
+	{ _MMIO(0x9888), 0x1c6c0000 },
+	{ _MMIO(0x9888), 0x1a6c0900 },
+	{ _MMIO(0x9888), 0x1c0f0aa0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f02aa },
+	{ _MMIO(0x9888), 0x1c2c5400 },
+	{ _MMIO(0x9888), 0x1e2c0001 },
+	{ _MMIO(0x9888), 0x1a2c5550 },
+	{ _MMIO(0x9888), 0x1993aa00 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2b904000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900421 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900420 },
+	{ _MMIO(0x9888), 0x45900021 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_l3_1_mux_config(struct drm_i915_private *dev_priv,
+		    const struct i915_oa_reg **regs,
+		    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_l3_1;
+	lens[n] = ARRAY_SIZE(mux_config_l3_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x30800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x0000efff },
+	{ _MMIO(0x2778), 0x00006000 },
+	{ _MMIO(0x277c), 0x0000f3ff },
+};
+
+static const struct i915_oa_reg flex_eu_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_rasterizer_and_pixel_backend[] = {
+	{ _MMIO(0x9888), 0x102d7800 },
+	{ _MMIO(0x9888), 0x122d79e0 },
+	{ _MMIO(0x9888), 0x0c2f0004 },
+	{ _MMIO(0x9888), 0x100e3800 },
+	{ _MMIO(0x9888), 0x180f0005 },
+	{ _MMIO(0x9888), 0x002d0940 },
+	{ _MMIO(0x9888), 0x022d802f },
+	{ _MMIO(0x9888), 0x042d4013 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0050 },
+	{ _MMIO(0x9888), 0x022f0010 },
+	{ _MMIO(0x9888), 0x002f0000 },
+	{ _MMIO(0x9888), 0x084c8000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x044e8000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x040e0480 },
+	{ _MMIO(0x9888), 0x000e0000 },
+	{ _MMIO(0x9888), 0x060f0027 },
+	{ _MMIO(0x9888), 0x100f0000 },
+	{ _MMIO(0x9888), 0x1a0f0040 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x439014a0 },
+	{ _MMIO(0x9888), 0x459000a4 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_rasterizer_and_pixel_backend_mux_config(struct drm_i915_private *dev_priv,
+					    const struct i915_oa_reg **regs,
+					    int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_rasterizer_and_pixel_backend;
+	lens[n] = ARRAY_SIZE(mux_config_rasterizer_and_pixel_backend);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_sampler[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x70800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+	{ _MMIO(0x2770), 0x0000c000 },
+	{ _MMIO(0x2774), 0x0000e7ff },
+	{ _MMIO(0x2778), 0x00003000 },
+	{ _MMIO(0x277c), 0x0000f9ff },
+	{ _MMIO(0x2780), 0x00000c00 },
+	{ _MMIO(0x2784), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_sampler[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_sampler[] = {
+	{ _MMIO(0x9888), 0x121300a0 },
+	{ _MMIO(0x9888), 0x141600ab },
+	{ _MMIO(0x9888), 0x123300a0 },
+	{ _MMIO(0x9888), 0x143600ab },
+	{ _MMIO(0x9888), 0x125300a0 },
+	{ _MMIO(0x9888), 0x145600ab },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e01a0 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0065 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x084c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0e4e8000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x044e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0800 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f023f },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2cc030 },
+	{ _MMIO(0x9888), 0x04132180 },
+	{ _MMIO(0x9888), 0x02130000 },
+	{ _MMIO(0x9888), 0x0c148000 },
+	{ _MMIO(0x9888), 0x0e142000 },
+	{ _MMIO(0x9888), 0x04148000 },
+	{ _MMIO(0x9888), 0x1e150140 },
+	{ _MMIO(0x9888), 0x1c150040 },
+	{ _MMIO(0x9888), 0x0c163000 },
+	{ _MMIO(0x9888), 0x0e160068 },
+	{ _MMIO(0x9888), 0x10160000 },
+	{ _MMIO(0x9888), 0x18160000 },
+	{ _MMIO(0x9888), 0x0a164000 },
+	{ _MMIO(0x9888), 0x04330043 },
+	{ _MMIO(0x9888), 0x02330000 },
+	{ _MMIO(0x9888), 0x0234a000 },
+	{ _MMIO(0x9888), 0x04342000 },
+	{ _MMIO(0x9888), 0x1c350015 },
+	{ _MMIO(0x9888), 0x02363460 },
+	{ _MMIO(0x9888), 0x10360000 },
+	{ _MMIO(0x9888), 0x04360000 },
+	{ _MMIO(0x9888), 0x06360000 },
+	{ _MMIO(0x9888), 0x08364000 },
+	{ _MMIO(0x9888), 0x06530043 },
+	{ _MMIO(0x9888), 0x02530000 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x06542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550100 },
+	{ _MMIO(0x9888), 0x0e563000 },
+	{ _MMIO(0x9888), 0x00563400 },
+	{ _MMIO(0x9888), 0x10560000 },
+	{ _MMIO(0x9888), 0x18560000 },
+	{ _MMIO(0x9888), 0x02560000 },
+	{ _MMIO(0x9888), 0x0c564000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b9014a0 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900001 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900820 },
+	{ _MMIO(0x9888), 0x45901022 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+};
+
+static int
+get_sampler_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_sampler;
+	lens[n] = ARRAY_SIZE(mux_config_sampler);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_1[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x30800000 },
+	{ _MMIO(0x2770), 0x00000002 },
+	{ _MMIO(0x2774), 0x00007fff },
+	{ _MMIO(0x2778), 0x00000000 },
+	{ _MMIO(0x277c), 0x00009fff },
+	{ _MMIO(0x2780), 0x00000002 },
+	{ _MMIO(0x2784), 0x0000efff },
+	{ _MMIO(0x2788), 0x00000000 },
+	{ _MMIO(0x278c), 0x0000f3ff },
+	{ _MMIO(0x2790), 0x00000002 },
+	{ _MMIO(0x2794), 0x0000fdff },
+	{ _MMIO(0x2798), 0x00000000 },
+	{ _MMIO(0x279c), 0x0000fe7f },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_1[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_1[] = {
+	{ _MMIO(0x9888), 0x141a0000 },
+	{ _MMIO(0x9888), 0x143a0000 },
+	{ _MMIO(0x9888), 0x145a0000 },
+	{ _MMIO(0x9888), 0x0c2d4000 },
+	{ _MMIO(0x9888), 0x0e2d5000 },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x102e0150 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e006a },
+	{ _MMIO(0x9888), 0x124c8000 },
+	{ _MMIO(0x9888), 0x144c8000 },
+	{ _MMIO(0x9888), 0x164c2000 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064c4000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x0c4e8000 },
+	{ _MMIO(0x9888), 0x0e4ea000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024e2000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x1c0f0bc0 },
+	{ _MMIO(0x9888), 0x180f4000 },
+	{ _MMIO(0x9888), 0x1a0f0302 },
+	{ _MMIO(0x9888), 0x1e2c0003 },
+	{ _MMIO(0x9888), 0x1a2c00f0 },
+	{ _MMIO(0x9888), 0x021a3080 },
+	{ _MMIO(0x9888), 0x041a31e5 },
+	{ _MMIO(0x9888), 0x02148000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150054 },
+	{ _MMIO(0x9888), 0x06168000 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x0c3a3280 },
+	{ _MMIO(0x9888), 0x0e3a0063 },
+	{ _MMIO(0x9888), 0x063a0061 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x0c348000 },
+	{ _MMIO(0x9888), 0x0e342000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1e350140 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x18360028 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x0e5a3080 },
+	{ _MMIO(0x9888), 0x005a3280 },
+	{ _MMIO(0x9888), 0x025a0063 },
+	{ _MMIO(0x9888), 0x0e548000 },
+	{ _MMIO(0x9888), 0x00548000 },
+	{ _MMIO(0x9888), 0x02542000 },
+	{ _MMIO(0x9888), 0x1e550400 },
+	{ _MMIO(0x9888), 0x1a552000 },
+	{ _MMIO(0x9888), 0x1c550001 },
+	{ _MMIO(0x9888), 0x18560080 },
+	{ _MMIO(0x9888), 0x02568000 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x1993a800 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x2d904000 },
+	{ _MMIO(0x9888), 0x2f904000 },
+	{ _MMIO(0x9888), 0x31904000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x59900000 },
+	{ _MMIO(0x9888), 0x4b900420 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+	{ _MMIO(0x9888), 0x4d900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900000 },
+	{ _MMIO(0x9888), 0x45901084 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+};
+
+static int
+get_tdl_1_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_1;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_1);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_tdl_2[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_tdl_2[] = {
+	{ _MMIO(0xe458), 0x00005004 },
+	{ _MMIO(0xe558), 0x00010003 },
+	{ _MMIO(0xe658), 0x00012011 },
+	{ _MMIO(0xe758), 0x00015014 },
+	{ _MMIO(0xe45c), 0x00051050 },
+	{ _MMIO(0xe55c), 0x00053052 },
+	{ _MMIO(0xe65c), 0x00055054 },
+};
+
+static const struct i915_oa_reg mux_config_tdl_2[] = {
+	{ _MMIO(0x9888), 0x141a026b },
+	{ _MMIO(0x9888), 0x143a0173 },
+	{ _MMIO(0x9888), 0x145a026b },
+	{ _MMIO(0x9888), 0x002d4000 },
+	{ _MMIO(0x9888), 0x022d5000 },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0c2e5000 },
+	{ _MMIO(0x9888), 0x0e2e0069 },
+	{ _MMIO(0x9888), 0x044c8000 },
+	{ _MMIO(0x9888), 0x064cc000 },
+	{ _MMIO(0x9888), 0x0a4c4000 },
+	{ _MMIO(0x9888), 0x004e8000 },
+	{ _MMIO(0x9888), 0x024ea000 },
+	{ _MMIO(0x9888), 0x064e2000 },
+	{ _MMIO(0x9888), 0x180f6000 },
+	{ _MMIO(0x9888), 0x1a0f030a },
+	{ _MMIO(0x9888), 0x1a2c03c0 },
+	{ _MMIO(0x9888), 0x041a37e7 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x0414a000 },
+	{ _MMIO(0x9888), 0x1c150050 },
+	{ _MMIO(0x9888), 0x08168000 },
+	{ _MMIO(0x9888), 0x0a168000 },
+	{ _MMIO(0x9888), 0x003a3380 },
+	{ _MMIO(0x9888), 0x063a006f },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x00348000 },
+	{ _MMIO(0x9888), 0x06342000 },
+	{ _MMIO(0x9888), 0x1a352000 },
+	{ _MMIO(0x9888), 0x1c350100 },
+	{ _MMIO(0x9888), 0x02368000 },
+	{ _MMIO(0x9888), 0x0c368000 },
+	{ _MMIO(0x9888), 0x025a37e7 },
+	{ _MMIO(0x9888), 0x0254a000 },
+	{ _MMIO(0x9888), 0x1c550005 },
+	{ _MMIO(0x9888), 0x04568000 },
+	{ _MMIO(0x9888), 0x06568000 },
+	{ _MMIO(0x9888), 0x03938000 },
+	{ _MMIO(0x9888), 0x05938000 },
+	{ _MMIO(0x9888), 0x07938000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x15904000 },
+	{ _MMIO(0x9888), 0x17904000 },
+	{ _MMIO(0x9888), 0x19904000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x53900000 },
+	{ _MMIO(0x9888), 0x43900020 },
+	{ _MMIO(0x9888), 0x45901080 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900001 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_tdl_2_mux_config(struct drm_i915_private *dev_priv,
+		     const struct i915_oa_reg **regs,
+		     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_tdl_2;
+	lens[n] = ARRAY_SIZE(mux_config_tdl_2);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_compute_extra[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2714), 0x00800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2724), 0x00800000 },
+};
+
+static const struct i915_oa_reg flex_eu_config_compute_extra[] = {
+	{ _MMIO(0xe458), 0x00001000 },
+	{ _MMIO(0xe558), 0x00003002 },
+	{ _MMIO(0xe658), 0x00005004 },
+	{ _MMIO(0xe758), 0x00011010 },
+	{ _MMIO(0xe45c), 0x00050012 },
+	{ _MMIO(0xe55c), 0x00052051 },
+	{ _MMIO(0xe65c), 0x00000008 },
+};
+
+static const struct i915_oa_reg mux_config_compute_extra[] = {
+	{ _MMIO(0x9888), 0x141a001f },
+	{ _MMIO(0x9888), 0x143a001f },
+	{ _MMIO(0x9888), 0x145a001f },
+	{ _MMIO(0x9888), 0x042d5000 },
+	{ _MMIO(0x9888), 0x062d1000 },
+	{ _MMIO(0x9888), 0x0e2e0094 },
+	{ _MMIO(0x9888), 0x084cc000 },
+	{ _MMIO(0x9888), 0x044ea000 },
+	{ _MMIO(0x9888), 0x1a0f00e0 },
+	{ _MMIO(0x9888), 0x1a2c0c00 },
+	{ _MMIO(0x9888), 0x061a0063 },
+	{ _MMIO(0x9888), 0x021a0000 },
+	{ _MMIO(0x9888), 0x06142000 },
+	{ _MMIO(0x9888), 0x1c150100 },
+	{ _MMIO(0x9888), 0x0c168000 },
+	{ _MMIO(0x9888), 0x043a3180 },
+	{ _MMIO(0x9888), 0x023a0000 },
+	{ _MMIO(0x9888), 0x04348000 },
+	{ _MMIO(0x9888), 0x1c350040 },
+	{ _MMIO(0x9888), 0x0a368000 },
+	{ _MMIO(0x9888), 0x045a0063 },
+	{ _MMIO(0x9888), 0x025a0000 },
+	{ _MMIO(0x9888), 0x04542000 },
+	{ _MMIO(0x9888), 0x1c550010 },
+	{ _MMIO(0x9888), 0x08568000 },
+	{ _MMIO(0x9888), 0x09938000 },
+	{ _MMIO(0x9888), 0x0b938000 },
+	{ _MMIO(0x9888), 0x0d938000 },
+	{ _MMIO(0x9888), 0x1b904000 },
+	{ _MMIO(0x9888), 0x1d904000 },
+	{ _MMIO(0x9888), 0x1f904000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x45900400 },
+	{ _MMIO(0x9888), 0x47900004 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_compute_extra_mux_config(struct drm_i915_private *dev_priv,
+			     const struct i915_oa_reg **regs,
+			     int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_compute_extra;
+	lens[n] = ARRAY_SIZE(mux_config_compute_extra);
+	n++;
+
+	return n;
+}
+
+static const struct i915_oa_reg b_counter_config_test_oa[] = {
+	{ _MMIO(0x2740), 0x00000000 },
+	{ _MMIO(0x2744), 0x00800000 },
+	{ _MMIO(0x2714), 0xf0800000 },
+	{ _MMIO(0x2710), 0x00000000 },
+	{ _MMIO(0x2724), 0xf0800000 },
+	{ _MMIO(0x2720), 0x00000000 },
+	{ _MMIO(0x2770), 0x00000004 },
+	{ _MMIO(0x2774), 0x00000000 },
+	{ _MMIO(0x2778), 0x00000003 },
+	{ _MMIO(0x277c), 0x00000000 },
+	{ _MMIO(0x2780), 0x00000007 },
+	{ _MMIO(0x2784), 0x00000000 },
+	{ _MMIO(0x2788), 0x00100002 },
+	{ _MMIO(0x278c), 0x0000fff7 },
+	{ _MMIO(0x2790), 0x00100002 },
+	{ _MMIO(0x2794), 0x0000ffcf },
+	{ _MMIO(0x2798), 0x00100082 },
+	{ _MMIO(0x279c), 0x0000ffef },
+	{ _MMIO(0x27a0), 0x001000c2 },
+	{ _MMIO(0x27a4), 0x0000ffe7 },
+	{ _MMIO(0x27a8), 0x00100001 },
+	{ _MMIO(0x27ac), 0x0000ffe7 },
+};
+
+static const struct i915_oa_reg flex_eu_config_test_oa[] = {
+};
+
+static const struct i915_oa_reg mux_config_test_oa[] = {
+	{ _MMIO(0x9888), 0x19800000 },
+	{ _MMIO(0x9888), 0x07800063 },
+	{ _MMIO(0x9888), 0x11800000 },
+	{ _MMIO(0x9888), 0x23810008 },
+	{ _MMIO(0x9888), 0x1d950400 },
+	{ _MMIO(0x9888), 0x0f922000 },
+	{ _MMIO(0x9888), 0x1f908000 },
+	{ _MMIO(0x9888), 0x37900000 },
+	{ _MMIO(0x9888), 0x55900000 },
+	{ _MMIO(0x9888), 0x47900000 },
+	{ _MMIO(0x9888), 0x33900000 },
+};
+
+static int
+get_test_oa_mux_config(struct drm_i915_private *dev_priv,
+		       const struct i915_oa_reg **regs,
+		       int *lens)
+{
+	int n = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs) < 1);
+	BUILD_BUG_ON(ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens) < 1);
+
+	regs[n] = mux_config_test_oa;
+	lens[n] = ARRAY_SIZE(mux_config_test_oa);
+	n++;
+
+	return n;
+}
+
+int i915_oa_select_metric_set_glk(struct drm_i915_private *dev_priv)
+{
+	dev_priv->perf.oa.n_mux_configs = 0;
+	dev_priv->perf.oa.b_counter_regs = NULL;
+	dev_priv->perf.oa.b_counter_regs_len = 0;
+	dev_priv->perf.oa.flex_regs = NULL;
+	dev_priv->perf.oa.flex_regs_len = 0;
+
+	switch (dev_priv->perf.oa.metrics_set) {
+	case METRIC_SET_ID_RENDER_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_basic_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_basic);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_BASIC:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_basic_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_BASIC\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_basic;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_basic);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_basic;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_basic);
+
+		return 0;
+	case METRIC_SET_ID_RENDER_PIPE_PROFILE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_render_pipe_profile_mux_config(dev_priv,
+							   dev_priv->perf.oa.mux_regs,
+							   dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RENDER_PIPE_PROFILE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_render_pipe_profile;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_render_pipe_profile);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_render_pipe_profile;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_render_pipe_profile);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_READS:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_reads_mux_config(dev_priv,
+						    dev_priv->perf.oa.mux_regs,
+						    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_READS\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_reads;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_reads);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_reads;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_reads);
+
+		return 0;
+	case METRIC_SET_ID_MEMORY_WRITES:
+		dev_priv->perf.oa.n_mux_configs =
+			get_memory_writes_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"MEMORY_WRITES\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_memory_writes;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_memory_writes);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_memory_writes;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_memory_writes);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTENDED:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extended_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTENDED\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extended;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extended);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extended;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extended);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_L3_CACHE:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_l3_cache_mux_config(dev_priv,
+							dev_priv->perf.oa.mux_regs,
+							dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_L3_CACHE\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_l3_cache;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_l3_cache);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_l3_cache;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_l3_cache);
+
+		return 0;
+	case METRIC_SET_ID_HDC_AND_SF:
+		dev_priv->perf.oa.n_mux_configs =
+			get_hdc_and_sf_mux_config(dev_priv,
+						  dev_priv->perf.oa.mux_regs,
+						  dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"HDC_AND_SF\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_hdc_and_sf;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_hdc_and_sf);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_hdc_and_sf;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_hdc_and_sf);
+
+		return 0;
+	case METRIC_SET_ID_L3_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_l3_1_mux_config(dev_priv,
+					    dev_priv->perf.oa.mux_regs,
+					    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"L3_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_l3_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_l3_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_l3_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_l3_1);
+
+		return 0;
+	case METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND:
+		dev_priv->perf.oa.n_mux_configs =
+			get_rasterizer_and_pixel_backend_mux_config(dev_priv,
+								    dev_priv->perf.oa.mux_regs,
+								    dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"RASTERIZER_AND_PIXEL_BACKEND\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_rasterizer_and_pixel_backend);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_rasterizer_and_pixel_backend;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_rasterizer_and_pixel_backend);
+
+		return 0;
+	case METRIC_SET_ID_SAMPLER:
+		dev_priv->perf.oa.n_mux_configs =
+			get_sampler_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"SAMPLER\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_sampler;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_sampler);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_sampler;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_sampler);
+
+		return 0;
+	case METRIC_SET_ID_TDL_1:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_1_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_1\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_1;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_1);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_1;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_1);
+
+		return 0;
+	case METRIC_SET_ID_TDL_2:
+		dev_priv->perf.oa.n_mux_configs =
+			get_tdl_2_mux_config(dev_priv,
+					     dev_priv->perf.oa.mux_regs,
+					     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TDL_2\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_tdl_2;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_tdl_2);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_tdl_2;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_tdl_2);
+
+		return 0;
+	case METRIC_SET_ID_COMPUTE_EXTRA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_compute_extra_mux_config(dev_priv,
+						     dev_priv->perf.oa.mux_regs,
+						     dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"COMPUTE_EXTRA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_compute_extra;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_compute_extra);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_compute_extra;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_compute_extra);
+
+		return 0;
+	case METRIC_SET_ID_TEST_OA:
+		dev_priv->perf.oa.n_mux_configs =
+			get_test_oa_mux_config(dev_priv,
+					       dev_priv->perf.oa.mux_regs,
+					       dev_priv->perf.oa.mux_regs_lens);
+		if (dev_priv->perf.oa.n_mux_configs == 0) {
+			DRM_DEBUG_DRIVER("No suitable MUX config for \"TEST_OA\" metric set\n");
+
+			/* EINVAL because *_register_sysfs already checked this
+			 * and so it wouldn't have been advertised to userspace and
+			 * so shouldn't have been requested
+			 */
+			return -EINVAL;
+		}
+
+		dev_priv->perf.oa.b_counter_regs =
+			b_counter_config_test_oa;
+		dev_priv->perf.oa.b_counter_regs_len =
+			ARRAY_SIZE(b_counter_config_test_oa);
+
+		dev_priv->perf.oa.flex_regs =
+			flex_eu_config_test_oa;
+		dev_priv->perf.oa.flex_regs_len =
+			ARRAY_SIZE(flex_eu_config_test_oa);
+
+		return 0;
+	default:
+		return -ENODEV;
+	}
+}
+
+static ssize_t
+show_render_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_BASIC);
+}
+
+static struct device_attribute dev_attr_render_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_basic[] = {
+	&dev_attr_render_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_basic = {
+	.name = "d72df5c7-5b4a-4274-a43f-00b0fd51fc68",
+	.attrs =  attrs_render_basic,
+};
+
+static ssize_t
+show_compute_basic_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_BASIC);
+}
+
+static struct device_attribute dev_attr_compute_basic_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_basic_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_basic[] = {
+	&dev_attr_compute_basic_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_basic = {
+	.name = "814285f6-354d-41d2-ba49-e24e622714a0",
+	.attrs =  attrs_compute_basic,
+};
+
+static ssize_t
+show_render_pipe_profile_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RENDER_PIPE_PROFILE);
+}
+
+static struct device_attribute dev_attr_render_pipe_profile_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_render_pipe_profile_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_render_pipe_profile[] = {
+	&dev_attr_render_pipe_profile_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_render_pipe_profile = {
+	.name = "07d397a6-b3e6-49f6-9433-a4f293d55978",
+	.attrs =  attrs_render_pipe_profile,
+};
+
+static ssize_t
+show_memory_reads_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_READS);
+}
+
+static struct device_attribute dev_attr_memory_reads_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_reads_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_reads[] = {
+	&dev_attr_memory_reads_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_reads = {
+	.name = "1a356946-5428-450b-a2f0-89f8783a302d",
+	.attrs =  attrs_memory_reads,
+};
+
+static ssize_t
+show_memory_writes_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_MEMORY_WRITES);
+}
+
+static struct device_attribute dev_attr_memory_writes_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_memory_writes_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_memory_writes[] = {
+	&dev_attr_memory_writes_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_memory_writes = {
+	.name = "5299be9d-7a61-4c99-9f81-f87e6c5aaca9",
+	.attrs =  attrs_memory_writes,
+};
+
+static ssize_t
+show_compute_extended_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTENDED);
+}
+
+static struct device_attribute dev_attr_compute_extended_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extended_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extended[] = {
+	&dev_attr_compute_extended_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extended = {
+	.name = "bc9bcff2-459a-4cbc-986d-a84b077153f3",
+	.attrs =  attrs_compute_extended,
+};
+
+static ssize_t
+show_compute_l3_cache_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_L3_CACHE);
+}
+
+static struct device_attribute dev_attr_compute_l3_cache_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_l3_cache_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_l3_cache[] = {
+	&dev_attr_compute_l3_cache_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_l3_cache = {
+	.name = "88ec931f-5b4a-453a-9db6-a61232b6143d",
+	.attrs =  attrs_compute_l3_cache,
+};
+
+static ssize_t
+show_hdc_and_sf_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_HDC_AND_SF);
+}
+
+static struct device_attribute dev_attr_hdc_and_sf_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_hdc_and_sf_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_hdc_and_sf[] = {
+	&dev_attr_hdc_and_sf_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_hdc_and_sf = {
+	.name = "530d176d-2a18-4014-adf8-1500c6c60835",
+	.attrs =  attrs_hdc_and_sf,
+};
+
+static ssize_t
+show_l3_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_L3_1);
+}
+
+static struct device_attribute dev_attr_l3_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_l3_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_l3_1[] = {
+	&dev_attr_l3_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_l3_1 = {
+	.name = "fdee5a5a-f23c-43d1-aa73-f6257c71671d",
+	.attrs =  attrs_l3_1,
+};
+
+static ssize_t
+show_rasterizer_and_pixel_backend_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_RASTERIZER_AND_PIXEL_BACKEND);
+}
+
+static struct device_attribute dev_attr_rasterizer_and_pixel_backend_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_rasterizer_and_pixel_backend_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_rasterizer_and_pixel_backend[] = {
+	&dev_attr_rasterizer_and_pixel_backend_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_rasterizer_and_pixel_backend = {
+	.name = "6617623e-ca73-4791-b2b7-ddedd0846a0c",
+	.attrs =  attrs_rasterizer_and_pixel_backend,
+};
+
+static ssize_t
+show_sampler_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_SAMPLER);
+}
+
+static struct device_attribute dev_attr_sampler_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_sampler_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_sampler[] = {
+	&dev_attr_sampler_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_sampler = {
+	.name = "f3b2ea63-e82e-4234-b418-44dd20dd34d0",
+	.attrs =  attrs_sampler,
+};
+
+static ssize_t
+show_tdl_1_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_1);
+}
+
+static struct device_attribute dev_attr_tdl_1_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_1_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_1[] = {
+	&dev_attr_tdl_1_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_1 = {
+	.name = "14411d35-cbf6-4f5e-b68b-190faf9a1a83",
+	.attrs =  attrs_tdl_1,
+};
+
+static ssize_t
+show_tdl_2_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TDL_2);
+}
+
+static struct device_attribute dev_attr_tdl_2_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_tdl_2_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_tdl_2[] = {
+	&dev_attr_tdl_2_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_tdl_2 = {
+	.name = "ffa3f263-0478-4724-8c9f-c911c5ec0f1d",
+	.attrs =  attrs_tdl_2,
+};
+
+static ssize_t
+show_compute_extra_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_COMPUTE_EXTRA);
+}
+
+static struct device_attribute dev_attr_compute_extra_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_compute_extra_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_compute_extra[] = {
+	&dev_attr_compute_extra_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_compute_extra = {
+	.name = "15274c82-27d2-4819-876a-7cb1a2c59ba4",
+	.attrs =  attrs_compute_extra,
+};
+
+static ssize_t
+show_test_oa_id(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", METRIC_SET_ID_TEST_OA);
+}
+
+static struct device_attribute dev_attr_test_oa_id = {
+	.attr = { .name = "id", .mode = 0444 },
+	.show = show_test_oa_id,
+	.store = NULL,
+};
+
+static struct attribute *attrs_test_oa[] = {
+	&dev_attr_test_oa_id.attr,
+	NULL,
+};
+
+static struct attribute_group group_test_oa = {
+	.name = "dd3fd789-e783-4204-8cd0-b671bbccb0cf",
+	.attrs =  attrs_test_oa,
+};
+
+int
+i915_perf_register_sysfs_glk(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+	int ret = 0;
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+		if (ret)
+			goto error_render_basic;
+	}
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+		if (ret)
+			goto error_compute_basic;
+	}
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+		if (ret)
+			goto error_render_pipe_profile;
+	}
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+		if (ret)
+			goto error_memory_reads;
+	}
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+		if (ret)
+			goto error_memory_writes;
+	}
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+		if (ret)
+			goto error_compute_extended;
+	}
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+		if (ret)
+			goto error_compute_l3_cache;
+	}
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+		if (ret)
+			goto error_hdc_and_sf;
+	}
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+		if (ret)
+			goto error_l3_1;
+	}
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+		if (ret)
+			goto error_rasterizer_and_pixel_backend;
+	}
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_sampler);
+		if (ret)
+			goto error_sampler;
+	}
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+		if (ret)
+			goto error_tdl_1;
+	}
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+		if (ret)
+			goto error_tdl_2;
+	}
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+		if (ret)
+			goto error_compute_extra;
+	}
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens)) {
+		ret = sysfs_create_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+		if (ret)
+			goto error_test_oa;
+	}
+
+	return 0;
+
+error_test_oa:
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+error_compute_extra:
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+error_tdl_2:
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+error_tdl_1:
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+error_sampler:
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+error_rasterizer_and_pixel_backend:
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+error_l3_1:
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+error_hdc_and_sf:
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+error_compute_l3_cache:
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+error_compute_extended:
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+error_memory_writes:
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+error_memory_reads:
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+error_render_pipe_profile:
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+error_compute_basic:
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+error_render_basic:
+	return ret;
+}
+
+void
+i915_perf_unregister_sysfs_glk(struct drm_i915_private *dev_priv)
+{
+	const struct i915_oa_reg *mux_regs[ARRAY_SIZE(dev_priv->perf.oa.mux_regs)];
+	int mux_lens[ARRAY_SIZE(dev_priv->perf.oa.mux_regs_lens)];
+
+	if (get_render_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_basic);
+	if (get_compute_basic_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_basic);
+	if (get_render_pipe_profile_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_render_pipe_profile);
+	if (get_memory_reads_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_reads);
+	if (get_memory_writes_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_memory_writes);
+	if (get_compute_extended_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extended);
+	if (get_compute_l3_cache_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_l3_cache);
+	if (get_hdc_and_sf_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_hdc_and_sf);
+	if (get_l3_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_l3_1);
+	if (get_rasterizer_and_pixel_backend_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_rasterizer_and_pixel_backend);
+	if (get_sampler_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_sampler);
+	if (get_tdl_1_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_1);
+	if (get_tdl_2_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_tdl_2);
+	if (get_compute_extra_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_compute_extra);
+	if (get_test_oa_mux_config(dev_priv, mux_regs, mux_lens))
+		sysfs_remove_group(dev_priv->perf.metrics_kobj, &group_test_oa);
+}
diff --git a/drivers/gpu/drm/i915/i915_oa_glk.h b/drivers/gpu/drm/i915/i915_oa_glk.h
new file mode 100644
index 000000000000..5511bb1cecf7
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_oa_glk.h
@@ -0,0 +1,40 @@
+/*
+ * Autogenerated file by GPU Top : https://github.com/rib/gputop
+ * DO NOT EDIT manually!
+ *
+ *
+ * Copyright (c) 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __I915_OA_GLK_H__
+#define __I915_OA_GLK_H__
+
+extern int i915_oa_n_builtin_metric_sets_glk;
+
+extern int i915_oa_select_metric_set_glk(struct drm_i915_private *dev_priv);
+
+extern int i915_perf_register_sysfs_glk(struct drm_i915_private *dev_priv);
+
+extern void i915_perf_unregister_sysfs_glk(struct drm_i915_private *dev_priv);
+
+#endif
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index d5126faf2a35..c281847eb56b 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -204,6 +204,7 @@
 #include "i915_oa_bxt.h"
 #include "i915_oa_kblgt2.h"
 #include "i915_oa_kblgt3.h"
+#include "i915_oa_glk.h"
 
 /* HW requires this to be a power of two, between 128k and 16M, though driver
  * is currently generally designed assuming the largest 16M size is used such
@@ -1781,7 +1782,7 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
 	 * RPT_ID field.
 	 */
 	if (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv) ||
-	    IS_KABYLAKE(dev_priv)) {
+	    IS_KABYLAKE(dev_priv) || IS_GEMINILAKE(dev_priv)) {
 		I915_WRITE(GEN8_OA_DEBUG,
 			   _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
 					      GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
@@ -2869,6 +2870,9 @@ void i915_perf_register(struct drm_i915_private *dev_priv)
 				goto sysfs_error;
 		} else
 			goto sysfs_error;
+	} else if (IS_GEMINILAKE(dev_priv)) {
+		if (i915_perf_register_sysfs_glk(dev_priv))
+			goto sysfs_error;
 	}
 
 	goto exit;
@@ -2915,7 +2919,9 @@ void i915_perf_unregister(struct drm_i915_private *dev_priv)
 			i915_perf_unregister_sysfs_kblgt2(dev_priv);
 		else if (IS_KBL_GT3(dev_priv))
 			i915_perf_unregister_sysfs_kblgt3(dev_priv);
-	}
+	} else if (IS_GEMINILAKE(dev_priv))
+		i915_perf_unregister_sysfs_glk(dev_priv);
+
 
 	kobject_put(dev_priv->perf.metrics_kobj);
 	dev_priv->perf.metrics_kobj = NULL;
@@ -3059,6 +3065,13 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 					i915_oa_n_builtin_metric_sets_kblgt3;
 				dev_priv->perf.oa.ops.select_metric_set =
 					i915_oa_select_metric_set_kblgt3;
+			} else if (IS_GEMINILAKE(dev_priv)) {
+				dev_priv->perf.oa.timestamp_frequency = 19200000;
+
+				dev_priv->perf.oa.n_builtin_sets =
+					i915_oa_n_builtin_metric_sets_glk;
+				dev_priv->perf.oa.ops.select_metric_set =
+					i915_oa_select_metric_set_glk;
 			}
 		}
 
-- 
2.11.0


* [PATCH v15 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (11 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 12/14] drm/i915/perf: add GLK support Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-05-31 12:33     ` [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
  13 siblings, 0 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

Dynamically shutting down slices/subslices effectively loses the NOA
configuration that was uploaded to those slices/subslices. When i915
perf is in use, we therefore need to reprogram it.

v2: Make sure we handle configs with more register writes than the max
    MI_LOAD_REGISTER_IMM can do (Lionel)
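
    As a minimal standalone sketch (not part of the patch), assuming
    the same 125 register-write limit per MI_LOAD_REGISTER_IMM that the
    patch uses, the number of LRI packets needed is simply the ceiling
    of n_regs / 125:

      /* Illustrative sketch only; mirrors the batching arithmetic used
       * when emitting the NOA mux config in groups of MAX_LOADS writes.
       */
      #include <assert.h>

      #define MAX_LOADS 125 /* assumed per-LRI limit for this sketch */

      static unsigned int lri_packets_needed(unsigned int n_regs)
      {
              /* One MI_LOAD_REGISTER_IMM header per group of MAX_LOADS
               * offset/value pairs.
               */
              return (n_regs + MAX_LOADS - 1) / MAX_LOADS;
      }

      int main(void)
      {
              assert(lri_packets_needed(50) == 1);
              assert(lri_packets_needed(125) == 1);
              assert(lri_packets_needed(300) == 3); /* 125 + 125 + 50 */
              return 0;
      }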

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  2 +
 drivers/gpu/drm/i915/i915_perf.c        | 77 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_ringbuffer.c |  3 ++
 3 files changed, 82 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c3acb0e9eb5d..499b2f9aa4be 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2400,6 +2400,7 @@ struct drm_i915_private {
 			const struct i915_oa_reg *mux_regs[6];
 			int mux_regs_lens[6];
 			int n_mux_configs;
+			int total_n_mux_regs;
 
 			const struct i915_oa_reg *b_counter_regs;
 			int b_counter_regs_len;
@@ -3535,6 +3536,7 @@ int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 void i915_oa_init_reg_state(struct intel_engine_cs *engine,
 			    struct i915_gem_context *ctx,
 			    uint32_t *reg_state);
+int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index c281847eb56b..4229c74baa22 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -1438,11 +1438,24 @@ static void config_oa_regs(struct drm_i915_private *dev_priv,
 	}
 }
 
+static void count_total_mux_regs(struct drm_i915_private *dev_priv)
+{
+	int i;
+
+	dev_priv->perf.oa.total_n_mux_regs = 0;
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		dev_priv->perf.oa.total_n_mux_regs +=
+			dev_priv->perf.oa.mux_regs_lens[i];
+	}
+}
+
 static int hsw_enable_metric_set(struct drm_i915_private *dev_priv)
 {
 	int ret = i915_oa_select_metric_set_hsw(dev_priv);
 	int i;
 
+	count_total_mux_regs(dev_priv);
+
 	if (ret)
 		return ret;
 
@@ -1756,6 +1769,8 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv)
 	int ret = dev_priv->perf.oa.ops.select_metric_set(dev_priv);
 	int i;
 
+	count_total_mux_regs(dev_priv);
+
 	if (ret)
 		return ret;
 
@@ -2094,6 +2109,68 @@ void i915_oa_init_reg_state(struct intel_engine_cs *engine,
 	gen8_update_reg_state_unlocked(ctx, reg_state);
 }
 
+int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv = req->i915;
+	int max_loads = 125;
+	int n_load, n_registers, n_loaded_register;
+	int i, j;
+	u32 *cs;
+
+	lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+	if (!IS_GEN(dev_priv, 8, 9))
+		return 0;
+
+	/* Perf not supported or not enabled. */
+	if (!dev_priv->perf.initialized ||
+	    !dev_priv->perf.oa.exclusive_stream)
+		return 0;
+
+	n_registers = dev_priv->perf.oa.total_n_mux_regs;
+	n_load = (n_registers / max_loads) +
+		 ((n_registers % max_loads) ? 1 : 0);
+
+	cs = intel_ring_begin(req,
+			      3 * 2 + /* MI_LOAD_REGISTER_IMM for chicken registers */
+			      n_load + /* MI_LOAD_REGISTER_IMM for mux registers */
+			      n_registers * 2 + /* offset & value for mux registers*/
+			      1 /* NOOP */);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_LOAD_REGISTER_IMM(1);
+	*cs++ = i915_mmio_reg_offset(GDT_CHICKEN_BITS);
+	*cs++ = 0xA0;
+
+	n_loaded_register = 0;
+	for (i = 0; i < dev_priv->perf.oa.n_mux_configs; i++) {
+		const struct i915_oa_reg *mux_regs =
+			dev_priv->perf.oa.mux_regs[i];
+		const int mux_regs_len = dev_priv->perf.oa.mux_regs_lens[i];
+
+		for (j = 0; j < mux_regs_len; j++) {
+			if ((n_loaded_register % max_loads) == 0) {
+				n_load = min(n_registers - n_loaded_register, max_loads);
+				*cs++ = MI_LOAD_REGISTER_IMM(n_load);
+			}
+
+			*cs++ = i915_mmio_reg_offset(mux_regs[j].addr);
+			*cs++ = mux_regs[j].value;
+			n_loaded_register++;
+		}
+	}
+
+	*cs++ = MI_LOAD_REGISTER_IMM(1);
+	*cs++ = i915_mmio_reg_offset(GDT_CHICKEN_BITS);
+	*cs++ = 0x80;
+
+	*cs++ = MI_NOOP;
+	intel_ring_advance(req, cs);
+
+	return 0;
+}
+
 /**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index acd1da9b62a3..67aaaebb194b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1874,6 +1874,9 @@ gen8_emit_bb_start(struct drm_i915_gem_request *req,
 			!(dispatch_flags & I915_DISPATCH_SECURE);
 	u32 *cs;
 
+	/* Emit NOA config */
+	i915_oa_emit_noa_config_locked(req);
+
 	cs = intel_ring_begin(req, 4);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
-- 
2.11.0


* [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes
  2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
                       ` (12 preceding siblings ...)
  2017-05-31 12:33     ` [PATCH v15 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
@ 2017-05-31 12:33     ` Lionel Landwerlin
  2017-06-01 11:50       ` Chris Wilson
  2017-06-01 11:52       ` Chris Wilson
  13 siblings, 2 replies; 87+ messages in thread
From: Lionel Landwerlin @ 2017-05-31 12:33 UTC (permalink / raw)
  To: intel-gfx

This adds the ability for userspace to request that the kernel track
and record SSEU configuration changes. These changes are inserted into
the perf stream so that userspace can interpret the OA reports using
the configuration that was applied at the time the reports were
generated.

v2: Handle timestamps wrapping around 32bits (Lionel)
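
    As a minimal standalone sketch (not part of the patch) of the
    wrap-safe 32-bit comparison this relies on, mirroring the
    timestamp_passed() helper added below:

      #include <assert.h>
      #include <stdint.h>

      /* True if t0 is at or after t1, tolerating one wrap of the
       * 32-bit timestamp counter.
       */
      static int timestamp_passed(uint32_t t0, uint32_t t1)
      {
              return (int32_t)(t0 - t1) >= 0;
      }

      int main(void)
      {
              /* 0x00000010 comes after 0xfffffff0 once the counter wraps. */
              assert(timestamp_passed(0x00000010u, 0xfffffff0u));
              assert(!timestamp_passed(0xfffffff0u, 0x00000010u));
              return 0;
      }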

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  45 ++++
 drivers/gpu/drm/i915/i915_gem_context.c |   3 +
 drivers/gpu/drm/i915/i915_gem_context.h |  21 ++
 drivers/gpu/drm/i915/i915_perf.c        | 459 ++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/intel_lrc.c        |   7 +-
 include/uapi/drm/i915_drm.h             |  30 +++
 6 files changed, 538 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 499b2f9aa4be..b2e4df1efb12 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1963,6 +1963,12 @@ struct i915_perf_stream {
 	struct i915_gem_context *ctx;
 
 	/**
+	 * @has_sseu: Whether the stream emits SSEU configuration changes
+	 * reports.
+	 */
+	bool has_sseu;
+
+	/**
 	 * @enabled: Whether the stream is currently enabled, considering
 	 * whether the stream was opened in a disabled state and based
 	 * on `I915_PERF_IOCTL_ENABLE` and `I915_PERF_IOCTL_DISABLE` calls.
@@ -2493,6 +2499,44 @@ struct drm_i915_private {
 			const struct i915_oa_format *oa_formats;
 			int n_builtin_sets;
 		} oa;
+
+		struct {
+			/**
+			 * Buffer containing change reports of the SSEU
+			 * configuration.
+			 */
+			struct i915_vma *vma;
+			u8 *vaddr;
+
+			/**
+			 * Scheduler write to the head, and the perf driver
+			 * reads from tail.
+			 */
+			u32 head;
+			u32 tail;
+
+			/**
+			 * Is the sseu buffer enabled.
+			 */
+			bool enabled;
+
+			/**
+			 * Whether the buffer overflown.
+			 */
+			bool overflow;
+
+			/**
+			 * Keeps track of how many times the OA unit has been
+			 * enabled. This number is used to discard stale
+			 * @perf_sseu in @i915_gem_context.
+			 */
+			atomic64_t enable_no;
+
+			/**
+			 * Lock writes & tail pointer updates on this buffer.
+			 */
+			spinlock_t ptr_lock;
+		} sseu_buffer;
 	} perf;
 
 	/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
@@ -3537,6 +3581,7 @@ void i915_oa_init_reg_state(struct intel_engine_cs *engine,
 			    struct i915_gem_context *ctx,
 			    uint32_t *reg_state);
 int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req);
+void i915_perf_emit_sseu_config(struct drm_i915_gem_request *req);
 
 /* i915_gem_evict.c */
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 60d0bbf0ae2b..368bb527a75a 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -510,6 +510,9 @@ mi_set_context(struct drm_i915_gem_request *req, u32 flags)
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
+	/* Notify perf of possible change in SSEU configuration. */
+	i915_perf_emit_sseu_config(req);
+
 	/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
 	if (INTEL_GEN(dev_priv) >= 7) {
 		*cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index e8247e2f6011..9c005be4933a 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -194,6 +194,27 @@ struct i915_gem_context {
 
 	/** remap_slice: Bitmask of cache lines that need remapping */
 	u8 remap_slice;
+
+	/**
+	 * @perf_sseu: last SSEU configuration notified to userspace
+	 *
+	 * SSEU configuration changes are notified to userspace through the
+	 * perf infrastructure. We keep track here of the last notified
+	 * configuration for a given context since configuration can change
+	 * per engine.
+	 */
+	struct sseu_dev_info perf_sseu;
+
+	/**
+	 * @perf_enable_no: last perf enable identifier
+	 *
+	 * Userspace can enable/disable the perf infrastructure whenever it
+	 * wants without reconfiguring the OA unit. This number is updated at
+	 * the same time as @perf_sseu by copying
+	 * dev_priv->perf.sseu_buffer.enable_no which changes every time the
+	 * user enables the OA unit.
+	 */
+	u64 perf_enable_no;
 };
 
 static inline bool i915_gem_context_is_closed(const struct i915_gem_context *ctx)
diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
index 4229c74baa22..325b9ac54bd5 100644
--- a/drivers/gpu/drm/i915/i915_perf.c
+++ b/drivers/gpu/drm/i915/i915_perf.c
@@ -212,6 +212,14 @@
  */
 #define OA_BUFFER_SIZE		SZ_16M
 
+/* Dimension the SSEU buffer based on the size of the OA buffer to ensure we
+ * don't run out of space before the OA unit notifies us that new data
+ * is available.
+ */
+#define SSEU_BUFFER_ITEM_SIZE	sizeof(struct perf_sseu_change)
+#define SSEU_BUFFER_SIZE	(OA_BUFFER_SIZE * SSEU_BUFFER_ITEM_SIZE / 256)
+#define SSEU_BUFFER_NB_ITEM	(SSEU_BUFFER_SIZE / SSEU_BUFFER_ITEM_SIZE)
+
 #define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))
 
 /**
@@ -350,6 +358,8 @@ struct perf_open_properties {
 	u64 single_context:1;
 	u64 ctx_handle;
 
+	bool sseu_enable;
+
 	/* OA sampling state */
 	int metrics_set;
 	int oa_format;
@@ -357,6 +367,16 @@ struct perf_open_properties {
 	int oa_period_exponent;
 };
 
+struct perf_sseu_change {
+	u32 timestamp;
+	struct drm_i915_perf_sseu_change change;
+};
+
+static bool timestamp_passed(u32 t0, u32 t1)
+{
+	return (s32)(t0 - t1) >= 0;
+}
+
 static u32 gen8_oa_hw_tail_read(struct drm_i915_private *dev_priv)
 {
 	return I915_READ(GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
@@ -569,6 +589,88 @@ static int append_oa_sample(struct i915_perf_stream *stream,
 	return 0;
 }
 
+static int append_sseu_change(struct i915_perf_stream *stream,
+			      char __user *buf,
+			      size_t count,
+			      size_t *offset,
+			      const struct perf_sseu_change *sseu_report)
+{
+	struct drm_i915_perf_record_header header;
+
+	header.type = DRM_I915_PERF_RECORD_SSEU_CHANGE;
+	header.pad = 0;
+	header.size = sizeof(struct drm_i915_perf_record_header) +
+		sizeof(struct drm_i915_perf_sseu_change);
+
+	if ((count - *offset) < header.size)
+		return -ENOSPC;
+
+	buf += *offset;
+	if (copy_to_user(buf, &header, sizeof(header)))
+		return -EFAULT;
+	buf += sizeof(header);
+
+	if (copy_to_user(buf, &sseu_report->change,
+			 sizeof(struct drm_i915_perf_sseu_change)))
+		return -EFAULT;
+
+	(*offset) += header.size;
+
+	return 0;
+}
+
+static struct perf_sseu_change *
+sseu_buffer_peek(struct drm_i915_private *dev_priv, u32 head, u32 tail)
+{
+	if (head == tail)
+		return NULL;
+
+	return (struct perf_sseu_change *) (dev_priv->perf.sseu_buffer.vaddr +
+					    tail * SSEU_BUFFER_ITEM_SIZE);
+}
+
+static struct perf_sseu_change *
+sseu_buffer_advance(struct drm_i915_private *dev_priv, u32 head, u32 *tail)
+{
+	*tail = (*tail + 1) % SSEU_BUFFER_NB_ITEM;
+
+	return sseu_buffer_peek(dev_priv, head, *tail);
+}
+
+static void
+sseu_buffer_read_pointers(struct i915_perf_stream *stream, u32 *head, u32 *tail)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+
+	if (!stream->has_sseu) {
+		*head = 0;
+		*tail = 0;
+		return;
+	}
+
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	*head = dev_priv->perf.sseu_buffer.head;
+	*tail = dev_priv->perf.sseu_buffer.tail;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+static void
+sseu_buffer_write_tail_pointer(struct i915_perf_stream *stream, u32 tail)
+{
+	struct drm_i915_private *dev_priv = stream->dev_priv;
+
+	if (!stream->has_sseu)
+		return;
+
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	dev_priv->perf.sseu_buffer.tail = tail;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
 /**
  * Copies all buffered OA reports into userspace read() buffer.
  * @stream: An i915-perf stream opened for OA metrics
@@ -604,6 +706,8 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 	unsigned int aged_tail_idx;
 	u32 head, tail;
 	u32 taken;
+	struct perf_sseu_change *sseu_report;
+	u32 sseu_head, sseu_tail;
 	int ret = 0;
 
 	if (WARN_ON(!stream->enabled))
@@ -641,6 +745,9 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 		      head, tail))
 		return -EIO;
 
+	/* Extract first SSEU report. */
+	sseu_buffer_read_pointers(stream, &sseu_head, &sseu_tail);
+	sseu_report = sseu_buffer_peek(dev_priv, sseu_head, sseu_tail);
 
 	for (/* none */;
 	     (taken = OA_TAKEN(tail, head));
@@ -679,6 +786,25 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 			continue;
 		}
 
+		while (sseu_report &&
+		       !timestamp_passed(sseu_report->timestamp, report32[1])) {
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (stream->ctx &&
+			    stream->ctx->hw_id != sseu_report->change.hw_id) {
+				sseu_report->change.hw_id = INVALID_CTX_ID;
+			}
+
+			ret = append_sseu_change(stream, buf, count,
+						 offset, sseu_report);
+			if (ret)
+				break;
+
+			sseu_report = sseu_buffer_advance(dev_priv, sseu_head,
+							  &sseu_tail);
+		}
+
 		/* XXX: Just keep the lower 21 bits for now since I'm not
 		 * entirely sure if the HW touches any of the higher bits in
 		 * this field
@@ -770,6 +896,9 @@ static int gen8_append_oa_reports(struct i915_perf_stream *stream,
 		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 	}
 
+	/* Update tail pointer of the sseu buffer. */
+	sseu_buffer_write_tail_pointer(stream, sseu_tail);
+
 	return ret;
 }
 
@@ -799,12 +928,32 @@ static int gen8_oa_read(struct i915_perf_stream *stream,
 			size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
+	bool overflow = false;
 	u32 oastatus;
 	int ret;
 
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
 		return -EIO;
 
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	overflow = dev_priv->perf.sseu_buffer.overflow;
+	if (overflow) {
+		DRM_DEBUG("SSEU buffer overflow\n");
+
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret) {
+			spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+			return ret;
+		}
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+	}
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
 	oastatus = I915_READ(GEN8_OASTATUS);
 
 	/* We treat OABUFFER_OVERFLOW as a significant error:
@@ -821,16 +970,18 @@ static int gen8_oa_read(struct i915_perf_stream *stream,
 	 * that something has gone quite badly wrong.
 	 */
 	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
-		if (ret)
-			return ret;
-
 		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
 			  dev_priv->perf.oa.period_exponent);
 
-		dev_priv->perf.oa.ops.oa_disable(dev_priv);
-		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+			if (ret)
+				return ret;
+
+			dev_priv->perf.oa.ops.oa_disable(dev_priv);
+			dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		}
 
 		/* Note: .oa_enable() is expected to re-init the oabuffer
 		 * and reset GEN8_OASTATUS for us
@@ -839,10 +990,12 @@ static int gen8_oa_read(struct i915_perf_stream *stream,
 	}
 
 	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
-		if (ret)
-			return ret;
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+			if (ret)
+				return ret;
+		}
 		I915_WRITE(GEN8_OASTATUS,
 			   oastatus & ~GEN8_OASTATUS_REPORT_LOST);
 	}
@@ -885,6 +1038,8 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 	unsigned int aged_tail_idx;
 	u32 head, tail;
 	u32 taken;
+	struct perf_sseu_change *sseu_report;
+	u32 sseu_head, sseu_tail;
 	int ret = 0;
 
 	if (WARN_ON(!stream->enabled))
@@ -922,6 +1077,9 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		      head, tail))
 		return -EIO;
 
+	/* Extract first SSEU report. */
+	sseu_buffer_read_pointers(stream, &sseu_head, &sseu_tail);
+	sseu_report = sseu_buffer_peek(dev_priv, sseu_head, sseu_tail);
 
 	for (/* none */;
 	     (taken = OA_TAKEN(tail, head));
@@ -954,6 +1112,26 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 			continue;
 		}
 
+		/* Emit any potential sseu report. */
+		while (sseu_report &&
+		       !timestamp_passed(sseu_report->timestamp, report32[1])) {
+			/* While filtering for a single context we avoid
+			 * leaking the IDs of other contexts.
+			 */
+			if (stream->ctx &&
+			    stream->ctx->hw_id != sseu_report->change.hw_id) {
+				sseu_report->change.hw_id = INVALID_CTX_ID;
+			}
+
+			ret = append_sseu_change(stream, buf, count,
+						 offset, sseu_report);
+			if (ret)
+				break;
+
+			sseu_report = sseu_buffer_advance(dev_priv, sseu_head,
+							  &sseu_tail);
+		}
+
 		ret = append_oa_sample(stream, buf, count, offset, report);
 		if (ret)
 			break;
@@ -983,6 +1161,9 @@ static int gen7_append_oa_reports(struct i915_perf_stream *stream,
 		spin_unlock_irqrestore(&dev_priv->perf.oa.oa_buffer.ptr_lock, flags);
 	}
 
+	/* Update tail pointer of the sseu buffer. */
+	sseu_buffer_write_tail_pointer(stream, sseu_tail);
+
 	return ret;
 }
 
@@ -1008,12 +1189,32 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 			size_t *offset)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
+	bool overflow = false;
 	u32 oastatus1;
 	int ret;
 
 	if (WARN_ON(!dev_priv->perf.oa.oa_buffer.vaddr))
 		return -EIO;
 
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	overflow = dev_priv->perf.sseu_buffer.overflow;
+	if (overflow) {
+		DRM_DEBUG("SSEU buffer overflow\n");
+
+		ret = append_oa_status(stream, buf, count, offset,
+				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+		if (ret) {
+			spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+			return ret;
+		}
+
+		dev_priv->perf.oa.ops.oa_disable(dev_priv);
+		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+	}
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
 	oastatus1 = I915_READ(GEN7_OASTATUS1);
 
 	/* XXX: On Haswell we don't have a safe way to clear oastatus1
@@ -1044,25 +1245,30 @@ static int gen7_oa_read(struct i915_perf_stream *stream,
 	 *   now.
 	 */
 	if (unlikely(oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW)) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
-		if (ret)
-			return ret;
-
 		DRM_DEBUG("OA buffer overflow (exponent = %d): force restart\n",
 			  dev_priv->perf.oa.period_exponent);
 
-		dev_priv->perf.oa.ops.oa_disable(dev_priv);
-		dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
+			if (ret)
+				return ret;
+
+			dev_priv->perf.oa.ops.oa_disable(dev_priv);
+			dev_priv->perf.oa.ops.oa_enable(dev_priv);
+		}
 
 		oastatus1 = I915_READ(GEN7_OASTATUS1);
 	}
 
 	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
-		ret = append_oa_status(stream, buf, count, offset,
-				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
-		if (ret)
-			return ret;
+		if (!overflow) {
+			ret = append_oa_status(stream, buf, count, offset,
+					       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
+			if (ret)
+				return ret;
+		}
+
 		dev_priv->perf.oa.gen7_latched_oastatus1 |=
 			GEN7_OASTATUS1_REPORT_LOST;
 	}
@@ -1217,18 +1423,25 @@ static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
 }
 
 static void
-free_oa_buffer(struct drm_i915_private *i915)
+free_sseu_buffer(struct drm_i915_private *i915)
 {
-	mutex_lock(&i915->drm.struct_mutex);
+	i915_gem_object_unpin_map(i915->perf.sseu_buffer.vma->obj);
+	i915_vma_unpin(i915->perf.sseu_buffer.vma);
+	i915_gem_object_put(i915->perf.sseu_buffer.vma->obj);
 
+	i915->perf.sseu_buffer.vma = NULL;
+	i915->perf.sseu_buffer.vaddr = NULL;
+}
+
+static void
+free_oa_buffer(struct drm_i915_private *i915)
+{
 	i915_gem_object_unpin_map(i915->perf.oa.oa_buffer.vma->obj);
 	i915_vma_unpin(i915->perf.oa.oa_buffer.vma);
 	i915_gem_object_put(i915->perf.oa.oa_buffer.vma->obj);
 
 	i915->perf.oa.oa_buffer.vma = NULL;
 	i915->perf.oa.oa_buffer.vaddr = NULL;
-
-	mutex_unlock(&i915->drm.struct_mutex);
 }
 
 static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
@@ -1244,8 +1457,15 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 
 	dev_priv->perf.oa.ops.disable_metric_set(dev_priv);
 
+	mutex_lock(&dev_priv->drm.struct_mutex);
+
+	if (stream->has_sseu)
+		free_sseu_buffer(dev_priv);
+
 	free_oa_buffer(dev_priv);
 
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
 	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 	intel_runtime_pm_put(dev_priv);
 
@@ -1258,6 +1478,97 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
 	}
 }
 
+static void init_sseu_buffer(struct drm_i915_private *dev_priv,
+			     bool need_lock)
+{
+	/* Clean up the auxiliary buffer. */
+	memset(dev_priv->perf.sseu_buffer.vaddr, 0, SSEU_BUFFER_SIZE);
+
+	atomic64_inc(&dev_priv->perf.sseu_buffer.enable_no);
+
+	if (need_lock)
+		spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	/* Initialize head & tail pointers. */
+	dev_priv->perf.sseu_buffer.head = 0;
+	dev_priv->perf.sseu_buffer.tail = 0;
+	dev_priv->perf.sseu_buffer.overflow = false;
+
+	if (need_lock)
+		spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+static int alloc_sseu_buffer(struct drm_i915_private *dev_priv)
+{
+	struct drm_i915_gem_object *bo;
+	struct i915_vma *vma;
+	int ret = 0;
+
+	bo = i915_gem_object_create(dev_priv, SSEU_BUFFER_SIZE);
+	if (IS_ERR(bo)) {
+		DRM_ERROR("Failed to allocate SSEU buffer\n");
+		ret = PTR_ERR(bo);
+		goto out;
+	}
+
+	vma = i915_gem_object_ggtt_pin(bo, NULL, 0, SSEU_BUFFER_SIZE, 0);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto err_unref;
+	}
+	dev_priv->perf.sseu_buffer.vma = vma;
+
+	dev_priv->perf.sseu_buffer.vaddr =
+		i915_gem_object_pin_map(bo, I915_MAP_WB);
+	if (IS_ERR(dev_priv->perf.sseu_buffer.vaddr)) {
+		ret = PTR_ERR(dev_priv->perf.sseu_buffer.vaddr);
+		goto err_unpin;
+	}
+
+	atomic64_set(&dev_priv->perf.sseu_buffer.enable_no, 0);
+
+	init_sseu_buffer(dev_priv, true);
+
+	DRM_DEBUG_DRIVER("OA SSEU Buffer initialized, gtt offset = 0x%x, vaddr = %p\n",
+			 i915_ggtt_offset(dev_priv->perf.sseu_buffer.vma),
+			 dev_priv->perf.sseu_buffer.vaddr);
+
+	goto out;
+
+err_unpin:
+	__i915_vma_unpin(vma);
+
+err_unref:
+	i915_gem_object_put(bo);
+
+	dev_priv->perf.sseu_buffer.vaddr = NULL;
+	dev_priv->perf.sseu_buffer.vma = NULL;
+
+out:
+	return ret;
+}
+
+static void sseu_enable(struct drm_i915_private *dev_priv)
+{
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	init_sseu_buffer(dev_priv, false);
+
+	dev_priv->perf.sseu_buffer.enabled = true;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+static void sseu_disable(struct drm_i915_private *dev_priv)
+{
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	dev_priv->perf.sseu_buffer.enabled = false;
+	dev_priv->perf.sseu_buffer.overflow = false;
+
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
 static void gen7_init_oa_buffer(struct drm_i915_private *dev_priv)
 {
 	u32 gtt_offset = i915_ggtt_offset(dev_priv->perf.oa.oa_buffer.vma);
@@ -1740,6 +2051,9 @@ static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
 		struct intel_context *ce = &ctx->engine[RCS];
 		u32 *regs;
 
+		memset(&ctx->perf_sseu, 0, sizeof(ctx->perf_sseu));
+		ctx->perf_enable_no = 0;
+
 		/* OA settings will be set upon first use */
 		if (!ce->state)
 			continue;
@@ -1898,6 +2212,9 @@ static void i915_oa_stream_enable(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 
+	if (stream->has_sseu)
+		sseu_enable(dev_priv);
+
 	dev_priv->perf.oa.ops.oa_enable(dev_priv);
 
 	if (dev_priv->perf.oa.periodic)
@@ -1928,6 +2245,9 @@ static void i915_oa_stream_disable(struct i915_perf_stream *stream)
 {
 	struct drm_i915_private *dev_priv = stream->dev_priv;
 
+	if (stream->has_sseu)
+		sseu_disable(dev_priv);
+
 	dev_priv->perf.oa.ops.oa_disable(dev_priv);
 
 	if (dev_priv->perf.oa.periodic)
@@ -2053,6 +2373,14 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 			return ret;
 	}
 
+	if (props->sseu_enable) {
+		ret = alloc_sseu_buffer(dev_priv);
+		if (ret)
+			goto err_sseu_buf_alloc;
+
+		stream->has_sseu = true;
+	}
+
 	ret = alloc_oa_buffer(dev_priv);
 	if (ret)
 		goto err_oa_buf_alloc;
@@ -2085,6 +2413,9 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
 err_enable:
 	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 	intel_runtime_pm_put(dev_priv);
+	free_sseu_buffer(dev_priv);
+
+err_sseu_buf_alloc:
 	free_oa_buffer(dev_priv);
 
 err_oa_buf_alloc:
@@ -2172,6 +2503,75 @@ int i915_oa_emit_noa_config_locked(struct drm_i915_gem_request *req)
 }
 
 /**
+ * i915_perf_emit_sseu_config - emit the SSEU configuration of a gem request
+ * @req: the request
+ *
+ * If the OA unit is enabled, write the SSEU configuration of the
+ * given request into a circular buffer. The buffer is read at the
+ * same time as the OA buffer and its elements are inserted into the
+ * perf stream to notify userspace of SSEU configuration changes.
+ * This is needed to interpret values from the OA reports.
+ */
+void i915_perf_emit_sseu_config(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv = req->i915;
+	struct perf_sseu_change *report;
+	const struct sseu_dev_info *sseu;
+	u32 head, timestamp;
+	u64 perf_enable_no;
+
+	/* Perf not supported. */
+	if (!dev_priv->perf.initialized)
+		return;
+
+	/* Nothing's changed, no need to signal userspace. */
+	sseu = &req->ctx->engine[req->engine->id].sseu;
+	perf_enable_no = atomic64_read(&dev_priv->perf.sseu_buffer.enable_no);
+	if (perf_enable_no == req->ctx->perf_enable_no &&
+	    memcmp(&req->ctx->perf_sseu, sseu, sizeof(*sseu)) == 0)
+		return;
+
+	memcpy(&req->ctx->perf_sseu, sseu, sizeof(*sseu));
+	req->ctx->perf_enable_no = perf_enable_no;
+
+	/* Read the timestamp outside the spin lock. */
+	timestamp = I915_READ(RING_TIMESTAMP(RENDER_RING_BASE));
+
+	spin_lock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+
+	if (!dev_priv->perf.sseu_buffer.enabled)
+		goto out;
+
+	head = dev_priv->perf.sseu_buffer.head;
+	dev_priv->perf.sseu_buffer.head =
+		(dev_priv->perf.sseu_buffer.head + 1) % SSEU_BUFFER_NB_ITEM;
+
+	/* This should never happen, unless we've wrongly dimensioned the SSEU
+	 * buffer. */
+	if (dev_priv->perf.sseu_buffer.head == dev_priv->perf.sseu_buffer.tail) {
+		dev_priv->perf.sseu_buffer.overflow = true;
+		goto out;
+	}
+
+	report = (struct perf_sseu_change *) (dev_priv->perf.sseu_buffer.vaddr +
+					      head * SSEU_BUFFER_ITEM_SIZE);
+	memset(report, 0, sizeof(*report));
+
+	report->timestamp = I915_READ(RING_TIMESTAMP(RENDER_RING_BASE));
+	report->change.hw_id = req->ctx->hw_id;
+
+	report->change.sseu.packed.slice_mask = sseu->slice_mask;
+	report->change.sseu.packed.subslice_mask = sseu->subslice_mask;
+	report->change.sseu.packed.min_eu_per_subslice =
+		sseu->min_eu_per_subslice;
+	report->change.sseu.packed.max_eu_per_subslice =
+		sseu->max_eu_per_subslice;
+
+out:
+	spin_unlock_irq(&dev_priv->perf.sseu_buffer.ptr_lock);
+}
+
+/**
  * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
  * @stream: An i915 perf stream
  * @file: An i915 perf stream file
@@ -2818,6 +3218,9 @@ static int read_properties_unlocked(struct drm_i915_private *dev_priv,
 			props->oa_periodic = true;
 			props->oa_period_exponent = value;
 			break;
+		case DRM_I915_PERF_PROP_SSEU_CHANGE:
+			props->sseu_enable = true;
+			break;
 		case DRM_I915_PERF_PROP_MAX:
 			MISSING_CASE(id);
 			return -EINVAL;
@@ -3178,6 +3581,10 @@ void i915_perf_init(struct drm_i915_private *dev_priv)
 		mutex_init(&dev_priv->perf.lock);
 		spin_lock_init(&dev_priv->perf.oa.oa_buffer.ptr_lock);
 
+		spin_lock_init(&dev_priv->perf.sseu_buffer.ptr_lock);
+		dev_priv->perf.sseu_buffer.enabled = false;
+		atomic64_set(&dev_priv->perf.sseu_buffer.enable_no, 0);
+
 		oa_sample_rate_hard_limit =
 			dev_priv->perf.oa.timestamp_frequency / 2;
 		dev_priv->perf.sysctl_header = register_sysctl_table(dev_root);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ad37e2b236b0..e2f83a12337b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -350,8 +350,10 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 		rq = port_unpack(&port[n], &count);
 		if (rq) {
 			GEM_BUG_ON(count > !n);
-			if (!count++)
+			if (!count++) {
+				i915_perf_emit_sseu_config(rq);
 				execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
+			}
 			port_set(&port[n], port_pack(rq, count));
 			desc = execlists_update_context(rq);
 			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
@@ -1406,6 +1408,9 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 		req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
 	}
 
+	/* Emit NOA config */
+	i915_oa_emit_noa_config_locked(req);
+
 	cs = intel_ring_begin(req, 4);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index d561b1f98b20..ca06d46a01af 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -439,6 +439,16 @@ typedef struct drm_i915_setparam {
 	int value;
 } drm_i915_setparam_t;
 
+union drm_i915_gem_param_sseu {
+	struct {
+		__u8 slice_mask;
+		__u8 subslice_mask;
+		__u8 min_eu_per_subslice;
+		__u8 max_eu_per_subslice;
+	} packed;
+	__u64 value;
+};
+
 /* A memory manager for regions of shared memory:
  */
 #define I915_MEM_REGION_AGP 1
@@ -1358,6 +1368,11 @@ enum drm_i915_perf_property_id {
 	 */
 	DRM_I915_PERF_PROP_OA_EXPONENT,
 
+	/**
+	 * Configure the stream to monitor sseu configuration changes.
+	 */
+	DRM_I915_PERF_PROP_SSEU_CHANGE,
+
 	DRM_I915_PERF_PROP_MAX /* non-ABI */
 };
 
@@ -1441,9 +1456,24 @@ enum drm_i915_perf_record_type {
 	 */
 	DRM_I915_PERF_RECORD_OA_BUFFER_LOST = 3,
 
+	/**
+	 * A change in the sseu configuration happened.
+	 *
+	 * struct {
+	 *     struct drm_i915_perf_record_header header;
+	 *     { struct drm_i915_perf_sseu_change change; } && DRM_I915_PERF_PROP_SSEU_CHANGE
+	 * };
+	 */
+	DRM_I915_PERF_RECORD_SSEU_CHANGE = 4,
+
 	DRM_I915_PERF_RECORD_MAX /* non-ABI */
 };
 
+struct drm_i915_perf_sseu_change {
+	__u32 hw_id;
+	union drm_i915_gem_param_sseu sseu;
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.11.0

* Re: [PATCH v15 06/14] drm/i915/perf: Add OA unit support for Gen 8+
  2017-05-31 12:33     ` [PATCH v15 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
@ 2017-06-01 11:48       ` Chris Wilson
  0 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2017-06-01 11:48 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

On Wed, May 31, 2017 at 01:33:47PM +0100, Lionel Landwerlin wrote:
>  			u32 gen7_latched_oastatus1;
> +			u32 ctx_oactxctrl_offset;
> +			u32 ctx_flexeu0_offset;
> +
> +			/* The RPT_ID/reason field for Gen8+ includes a bit
> +			 * to determine if the CTX ID in the report is valid
> +			 * but the specific bit differs between Gen 8 and 9
> +			 */

This is completely minor, but a coding-style nit nevertheless: can you
go through and make sure all block comments are
/*
 * Hello World!
 */
i.e. a blank first line (for symmetry with the blank last line, apparently),
but it does pay off when mixing normal comment blocks with kerneldoc.
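
For example, the comment quoted above would then read:

/*
 * The RPT_ID/reason field for Gen8+ includes a bit to determine if the
 * CTX ID in the report is valid, but the specific bit differs between
 * Gen 8 and 9.
 */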
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes
  2017-05-31 12:33     ` [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
@ 2017-06-01 11:50       ` Chris Wilson
  2017-06-01 11:52       ` Chris Wilson
  1 sibling, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2017-06-01 11:50 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

On Wed, May 31, 2017 at 01:33:55PM +0100, Lionel Landwerlin wrote:
> @@ -2493,6 +2499,44 @@ struct drm_i915_private {
>  			const struct i915_oa_format *oa_formats;
>  			int n_builtin_sets;
>  		} oa;
> +
> +		struct {
> +			/**
> +			 * Buffer containing change reports of the SSEU
> +			 * configuration.
> +			 */
> +			struct i915_vma *vma;
> +			u8 *vaddr;

Make me void *. The kernel uses gcc, so arithmetic on void * is valid, and
this avoids lots of casts around the place.
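
As a rough sketch of what that buys (assuming vaddr becomes void *), the
peek helper in this patch loses its cast:

static struct perf_sseu_change *
sseu_buffer_peek(struct drm_i915_private *dev_priv, u32 head, u32 tail)
{
	if (head == tail)
		return NULL;

	return dev_priv->perf.sseu_buffer.vaddr +
	       tail * SSEU_BUFFER_ITEM_SIZE;
}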
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes
  2017-05-31 12:33     ` [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
  2017-06-01 11:50       ` Chris Wilson
@ 2017-06-01 11:52       ` Chris Wilson
  1 sibling, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2017-06-01 11:52 UTC (permalink / raw)
  To: Lionel Landwerlin; +Cc: intel-gfx

On Wed, May 31, 2017 at 01:33:55PM +0100, Lionel Landwerlin wrote:
>  static void
> -free_oa_buffer(struct drm_i915_private *i915)
> +free_sseu_buffer(struct drm_i915_private *i915)
>  {
> -	mutex_lock(&i915->drm.struct_mutex);
> +	i915_gem_object_unpin_map(i915->perf.sseu_buffer.vma->obj);
> +	i915_vma_unpin(i915->perf.sseu_buffer.vma);
> +	i915_gem_object_put(i915->perf.sseu_buffer.vma->obj);

Unpin followed by put is common enough that it is wrapped into
i915_vma_unpin_and_release().
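
i.e. something along these lines (a sketch, assuming the single-pointer
form of that helper, which also drops the object reference and clears
the vma pointer):

static void
free_sseu_buffer(struct drm_i915_private *i915)
{
	/* The mapping still needs to be unpinned explicitly. */
	i915_gem_object_unpin_map(i915->perf.sseu_buffer.vma->obj);
	i915_vma_unpin_and_release(&i915->perf.sseu_buffer.vma);

	i915->perf.sseu_buffer.vaddr = NULL;
}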
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

Thread overview: 87+ messages
2017-04-12 15:55 [PATCH v4 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Robert Bragg
2017-04-12 15:55 ` [PATCH v4 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Robert Bragg
2017-04-12 15:55 ` [PATCH v4 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops Robert Bragg
2017-04-12 15:55 ` [PATCH v4 03/15] drm/i915/perf: avoid read back of head register Robert Bragg
2017-04-12 15:55 ` [PATCH v4 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read Robert Bragg
2017-04-12 15:55 ` [PATCH v4 05/15] drm/i915/perf: improve tail race workaround Robert Bragg
2017-04-12 15:55 ` [PATCH v4 06/15] drm/i915/perf: improve invalid OA format debug message Robert Bragg
2017-04-12 15:55 ` [PATCH v4 07/15] drm/i915/perf: better pipeline aged/aging tail updates Robert Bragg
2017-04-12 15:55 ` [PATCH v4 08/15] drm/i915/perf: rate limit spurious oa report notice Robert Bragg
2017-04-12 15:55 ` [PATCH v4 09/15] drm/i915: expose _SLICE_MASK GETPARM Robert Bragg
2017-04-12 15:55 ` [PATCH v4 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM Robert Bragg
2017-04-12 15:55 ` [PATCH v4 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Robert Bragg
2017-04-12 15:55 ` [PATCH v4 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Robert Bragg
2017-04-12 17:47   ` Matthew Auld
2017-04-12 15:55 ` [PATCH v4 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Robert Bragg
2017-04-12 15:55 ` [PATCH v4 14/15] drm/i915/perf: per-gen timebase for checking sample freq Robert Bragg
2017-04-12 15:55 ` [PATCH v4 15/15] drm/i915/perf: remove perf.hook_lock Robert Bragg
2017-04-12 17:33 ` ✓ Fi.CI.BAT: success for Enable OA unit for Gen 8 and 9 in i915 perf (rev6) Patchwork
2017-04-24  7:17 ` [PATCH 00/15] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 01/15] drm/i915/perf: fix gen7_append_oa_reports comment Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 02/15] drm/i915/perf: avoid poll, read, EAGAIN busy loops Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 03/15] drm/i915/perf: avoid read back of head register Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 04/15] drm/i915/perf: no head/tail ref in gen7_oa_read Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 05/15] drm/i915/perf: improve tail race workaround Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 06/15] drm/i915/perf: improve invalid OA format debug message Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 07/15] drm/i915/perf: better pipeline aged/aging tail updates Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 08/15] drm/i915/perf: rate limit spurious oa report notice Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 09/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 10/15] drm/i915: expose _SUBSLICE_MASK GETPARM Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 11/15] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 12/15] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
2017-04-24 10:35     ` Chris Wilson
2017-04-24 18:49       ` [PATCH v6 " Lionel Landwerlin
2017-04-25 16:20         ` Lionel Landwerlin
2017-04-25 16:42         ` Matthew Auld
2017-04-25 17:10           ` Lionel Landwerlin
2017-04-25 17:47             ` Matthew Auld
2017-04-25 17:30           ` [PATCH v7 " Lionel Landwerlin
2017-04-27  8:53             ` Chris Wilson
2017-05-02  1:17               ` [PATCH v9 " Lionel Landwerlin
2017-05-02 20:14                 ` Chris Wilson
2017-05-02 21:44                   ` [PATCH v10 " Lionel Landwerlin
2017-05-03 10:31                     ` Chris Wilson
2017-04-24  7:17   ` [PATCH v5 13/15] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 14/15] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
2017-04-24  7:17   ` [PATCH v5 15/15] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
2017-05-03 16:23   ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Lionel Landwerlin
2017-05-03 16:23     ` [PATCH v2 9/15] drm/i915: expose _SUBSLICE_MASK GETPARM Lionel Landwerlin
2017-05-03 16:33     ` [PATCH v2 9/15] drm/i915: expose _SLICE_MASK GETPARM Chris Wilson
2017-05-03 18:14       ` Lionel Landwerlin
2017-05-26 11:56   ` [PATCH v14 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 02/14] drm/i915: Program RPCS for Broadwell Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 03/14] drm/i915: Record the sseu configuration per-context & engine Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 04/14] drm/i915/perf: rework mux configurations queries Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 08/14] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 09/14] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 10/14] drm/i915: add KBL GT2/GT3 check macros Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 11/14] drm/i915/perf: add KBL support Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 12/14] drm/i915/perf: add GLK support Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
2017-05-30 18:51       ` Lionel Landwerlin
2017-05-26 11:56     ` [PATCH v14 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
2017-05-26 12:28   ` ✓ Fi.CI.BAT: success for series starting with [v2,9/15] drm/i915: expose _SLICE_MASK GETPARM (rev2) Patchwork
2017-05-31 12:33   ` [PATCH v15 00/14] Enable OA unit for Gen 8 and 9 in i915 perf Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 01/14] drm/i915: Record both min/max eu_per_subslice in sseu_dev_info Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 02/14] drm/i915: Program RPCS for Broadwell Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 03/14] drm/i915: Record the sseu configuration per-context & engine Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 04/14] drm/i915/perf: rework mux configurations queries Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 05/14] drm/i915/perf: Add 'render basic' Gen8+ OA unit configs Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 06/14] drm/i915/perf: Add OA unit support for Gen 8+ Lionel Landwerlin
2017-06-01 11:48       ` Chris Wilson
2017-05-31 12:33     ` [PATCH v15 07/14] drm/i915/perf: Add more OA configs for BDW, CHV, SKL + BXT Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 08/14] drm/i915/perf: per-gen timebase for checking sample freq Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 09/14] drm/i915/perf: remove perf.hook_lock Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 10/14] drm/i915: add KBL GT2/GT3 check macros Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 11/14] drm/i915/perf: add KBL support Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 12/14] drm/i915/perf: add GLK support Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 13/14] drm/i915/perf: reprogram NOA muxes at the beginning of each workload Lionel Landwerlin
2017-05-31 12:33     ` [PATCH v15 14/14] drm/i915/perf: notify sseu configuration changes Lionel Landwerlin
2017-06-01 11:50       ` Chris Wilson
2017-06-01 11:52       ` Chris Wilson
2017-04-24 18:52 ` ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev7) Patchwork
2017-04-25 17:35 ` ✗ Fi.CI.BAT: failure for Enable OA unit for Gen 8 and 9 in i915 perf (rev8) Patchwork
