* [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure
@ 2022-11-05  0:32 Umesh Nerlige Ramappa
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32 Umesh Nerlige Ramappa
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Umesh Nerlige Ramappa @ 2022-11-05  0:32 UTC (permalink / raw)
  To: intel-gfx

Engine busyness sampled over a ~10ms period is failing, with busyness
ranging approximately from 87% to 115%. The expected range is +/- 5% of
the sample period.

When determining the busyness of an active engine, the GuC based engine
busyness implementation relies on a 64 bit timestamp register read. The
latency incurred by this register read causes the failure.

On DG1, when the test fails, the observed latencies range from 900us to
1.5ms.

In order to make the selftest more robust and account for such
latencies, increase the sample period to 100 ms.
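
As a rough illustration of why the larger sample period helps, the worst
case error from the read latency scales inversely with the sample period
(a back-of-the-envelope sketch using the DG1 numbers above, not code from
this series):

static unsigned int worst_case_error_pct(unsigned int latency_us,
					 unsigned int sample_us)
{
	/* assume the full read latency lands on one side of the sample */
	return (latency_us * 100) / sample_us;
}

/*
 * worst_case_error_pct(1500, 10000)  == 15  -> outside the +/- 5% tolerance
 * worst_case_error_pct(1500, 100000) == 1   -> well within the tolerance
 */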

v2: (Tvrtko)
In addition, refactor intel_uncore_read64_2x32 to acquire the forcewake
once before reading the upper and lower register dwords.

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>

Umesh Nerlige Ramappa (2):
  i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  drm/i915/selftest: Bump up sample period for busy stats selftest

 drivers/gpu/drm/i915/gt/selftest_engine_pm.c |  2 +-
 drivers/gpu/drm/i915/intel_uncore.h          | 44 +++++++++++++-------
 2 files changed, 31 insertions(+), 15 deletions(-)

-- 
2.36.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  2022-11-05  0:32 [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure Umesh Nerlige Ramappa
@ 2022-11-05  0:32 ` Umesh Nerlige Ramappa
  2022-11-07 10:13   ` Tvrtko Ursulin
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest Umesh Nerlige Ramappa
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Umesh Nerlige Ramappa @ 2022-11-05  0:32 UTC (permalink / raw)
  To: intel-gfx

The PMU reads the GT timestamp as a 2x32 mmio read, and since the upper
and lower 32 bit registers are read in a loop, there is latency between
getting the GT timestamp and the CPU timestamp. As part of the
resolution, refactor intel_uncore_read64_2x32 to acquire forcewake and
the uncore lock once, prior to reading the upper and lower regs.
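
As an illustration of how a caller could then pair the GT timestamp with
a CPU timestamp, a minimal sketch (not part of this patch; the helper and
register names below are placeholders):

static void sample_gt_and_cpu_ts(struct intel_uncore *uncore,
				 i915_reg_t lo, i915_reg_t hi,
				 u64 *gt_ts, ktime_t *cpu_ts)
{
	/* forcewake and uncore->lock are now taken once inside the helper */
	*gt_ts = intel_uncore_read64_2x32(uncore, lo, hi);

	/* CPU timestamp captured right after the mmio reads complete */
	*cpu_ts = ktime_get();
}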

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
---
 drivers/gpu/drm/i915/intel_uncore.h | 44 ++++++++++++++++++++---------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
index 5449146a0624..e9e38490815d 100644
--- a/drivers/gpu/drm/i915/intel_uncore.h
+++ b/drivers/gpu/drm/i915/intel_uncore.h
@@ -382,20 +382,6 @@ __uncore_write(write_notrace, 32, l, false)
  */
 __uncore_read(read64, 64, q, true)
 
-static inline u64
-intel_uncore_read64_2x32(struct intel_uncore *uncore,
-			 i915_reg_t lower_reg, i915_reg_t upper_reg)
-{
-	u32 upper, lower, old_upper, loop = 0;
-	upper = intel_uncore_read(uncore, upper_reg);
-	do {
-		old_upper = upper;
-		lower = intel_uncore_read(uncore, lower_reg);
-		upper = intel_uncore_read(uncore, upper_reg);
-	} while (upper != old_upper && loop++ < 2);
-	return (u64)upper << 32 | lower;
-}
-
 #define intel_uncore_posting_read(...) ((void)intel_uncore_read_notrace(__VA_ARGS__))
 #define intel_uncore_posting_read16(...) ((void)intel_uncore_read16_notrace(__VA_ARGS__))
 
@@ -455,6 +441,36 @@ static inline void intel_uncore_rmw_fw(struct intel_uncore *uncore,
 		intel_uncore_write_fw(uncore, reg, val);
 }
 
+static inline u64
+intel_uncore_read64_2x32(struct intel_uncore *uncore,
+			 i915_reg_t lower_reg, i915_reg_t upper_reg)
+{
+	u32 upper, lower, old_upper, loop = 0;
+	enum forcewake_domains fw_domains;
+	unsigned long flags;
+
+	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
+						    FW_REG_READ);
+
+	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
+						    FW_REG_READ);
+
+	spin_lock_irqsave(&uncore->lock, flags);
+	intel_uncore_forcewake_get__locked(uncore, fw_domains);
+
+	upper = intel_uncore_read_fw(uncore, upper_reg);
+	do {
+		old_upper = upper;
+		lower = intel_uncore_read_fw(uncore, lower_reg);
+		upper = intel_uncore_read_fw(uncore, upper_reg);
+	} while (upper != old_upper && loop++ < 2);
+
+	intel_uncore_forcewake_put__locked(uncore, fw_domains);
+	spin_unlock_irqrestore(&uncore->lock, flags);
+
+	return (u64)upper << 32 | lower;
+}
+
 static inline int intel_uncore_write_and_verify(struct intel_uncore *uncore,
 						i915_reg_t reg, u32 val,
 						u32 mask, u32 expected_val)
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest
  2022-11-05  0:32 [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure Umesh Nerlige Ramappa
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32 Umesh Nerlige Ramappa
@ 2022-11-05  0:32 ` Umesh Nerlige Ramappa
  2022-11-07 10:16   ` Tvrtko Ursulin
  2022-11-07 23:33   ` Dixit, Ashutosh
  2022-11-05  0:57 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Fix live busy stats selftest failure Patchwork
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 14+ messages in thread
From: Umesh Nerlige Ramappa @ 2022-11-05  0:32 UTC (permalink / raw)
  To: intel-gfx

Engine busyness sampled over a ~10ms period is failing, with busyness
ranging approximately from 87% to 115%. The expected range is +/- 5% of
the sample period.

When determining the busyness of an active engine, the GuC based engine
busyness implementation relies on a 64 bit timestamp register read. The
latency incurred by this register read causes the failure.

On DG1, when the test fails, the observed latencies range from 900us to
1.5ms.

One solution tried was to reduce the latency between the register read
and the CPU timestamp capture, but such an optimization does not add
value for the user, since the CPU timestamp obtained here is only used
by (1) this selftest and (2) the i915 rps implementation specific to the
execlist scheduler. Also, this solution only reduces the frequency of
the failure and does not eliminate it.

In order to make the selftest more robust and account for such
latencies, increase the sample period to 100 ms.
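
For reference, the kind of check the selftest applies can be sketched as
below (simplified, not code from this patch; the real check lives in
live_engine_busy_stats()):

static bool busyness_within_tolerance(u64 busy_ns, u64 wall_ns)
{
	/* expect busy time ~= wall time while the engine is kept busy */
	u64 err = busy_ns > wall_ns ? busy_ns - wall_ns : wall_ns - busy_ns;

	/* tolerance is +/- 5% of the sample period */
	return err * 100 <= wall_ns * 5;
}

With a 100 ms sample period, the observed 900us - 1.5ms latencies stay
well inside that tolerance.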

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
---
 drivers/gpu/drm/i915/gt/selftest_engine_pm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
index 0dcb3ed44a73..87c94314cf67 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
@@ -317,7 +317,7 @@ static int live_engine_busy_stats(void *arg)
 		ENGINE_TRACE(engine, "measuring busy time\n");
 		preempt_disable();
 		de = intel_engine_get_busy_time(engine, &t[0]);
-		mdelay(10);
+		mdelay(100);
 		de = ktime_sub(intel_engine_get_busy_time(engine, &t[1]), de);
 		preempt_enable();
 		dt = ktime_sub(t[1], t[0]);
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Fix live busy stats selftest failure
  2022-11-05  0:32 [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure Umesh Nerlige Ramappa
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32 Umesh Nerlige Ramappa
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest Umesh Nerlige Ramappa
@ 2022-11-05  0:57 ` Patchwork
  2022-11-05  1:19 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2022-11-05 13:59 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  4 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2022-11-05  0:57 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa; +Cc: intel-gfx

== Series Details ==

Series: Fix live busy stats selftest failure
URL   : https://patchwork.freedesktop.org/series/110557/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for Fix live busy stats selftest failure
  2022-11-05  0:32 [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure Umesh Nerlige Ramappa
                   ` (2 preceding siblings ...)
  2022-11-05  0:57 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Fix live busy stats selftest failure Patchwork
@ 2022-11-05  1:19 ` Patchwork
  2022-11-05 13:59 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  4 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2022-11-05  1:19 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 2436 bytes --]

== Series Details ==

Series: Fix live busy stats selftest failure
URL   : https://patchwork.freedesktop.org/series/110557/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_12346 -> Patchwork_110557v1
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/index.html

Participating hosts (39 -> 27)
------------------------------

  Missing    (12): fi-bdw-samus bat-dg2-8 bat-dg2-9 bat-adlp-6 bat-adlp-4 fi-ctg-p8600 bat-adln-1 bat-rplp-1 bat-rpls-1 bat-rpls-2 bat-dg2-11 bat-jsl-1 

Known issues
------------

  Here are the changes found in Patchwork_110557v1 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_gttfill@basic:
    - fi-pnv-d510:        [PASS][1] -> [FAIL][2] ([i915#7229])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/fi-pnv-d510/igt@gem_exec_gttfill@basic.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/fi-pnv-d510/igt@gem_exec_gttfill@basic.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@hangcheck:
    - {fi-ehl-2}:         [INCOMPLETE][3] -> [PASS][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/fi-ehl-2/igt@i915_selftest@live@hangcheck.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/fi-ehl-2/igt@i915_selftest@live@hangcheck.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#7229]: https://gitlab.freedesktop.org/drm/intel/issues/7229


Build changes
-------------

  * Linux: CI_DRM_12346 -> Patchwork_110557v1

  CI-20190529: 20190529
  CI_DRM_12346: 7b32ba9462baa932abf6cbe2f1a8ecb79e922a6e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7044: dbeb6f92720292f8303182a0e649284cea5b11a6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_110557v1: 7b32ba9462baa932abf6cbe2f1a8ecb79e922a6e @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

231612650801 drm/i915/selftest: Bump up sample period for busy stats selftest
18b0e07b0348 i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/index.html

[-- Attachment #2: Type: text/html, Size: 3008 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Intel-gfx] ✗ Fi.CI.IGT: failure for Fix live busy stats selftest failure
  2022-11-05  0:32 [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure Umesh Nerlige Ramappa
                   ` (3 preceding siblings ...)
  2022-11-05  1:19 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2022-11-05 13:59 ` Patchwork
  4 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2022-11-05 13:59 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 30047 bytes --]

== Series Details ==

Series: Fix live busy stats selftest failure
URL   : https://patchwork.freedesktop.org/series/110557/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_12346_full -> Patchwork_110557v1_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_110557v1_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_110557v1_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (10 -> 9)
------------------------------

  Missing    (1): shard-dg1 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_110557v1_full:

### IGT changes ###

#### Possible regressions ####

  * igt@api_intel_allocator@fork-simple-stress:
    - shard-tglb:         [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-tglb7/igt@api_intel_allocator@fork-simple-stress.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-tglb5/igt@api_intel_allocator@fork-simple-stress.html

  * igt@gem_eio@reset-stress:
    - shard-snb:          [PASS][3] -> [INCOMPLETE][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-snb7/igt@gem_eio@reset-stress.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-snb6/igt@gem_eio@reset-stress.html

  * igt@kms_plane_scaling@plane-upscale-with-modifiers-20x20@pipe-a-dp-1:
    - shard-apl:          [PASS][5] -> [DMESG-WARN][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl7/igt@kms_plane_scaling@plane-upscale-with-modifiers-20x20@pipe-a-dp-1.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@kms_plane_scaling@plane-upscale-with-modifiers-20x20@pipe-a-dp-1.html

  
Known issues
------------

  Here are the changes found in Patchwork_110557v1_full that come from known issues:

### CI changes ###

#### Possible fixes ####

  * boot:
    - shard-glk:          ([PASS][7], [PASS][8], [FAIL][9], [PASS][10], [PASS][11], [PASS][12], [PASS][13], [PASS][14], [PASS][15], [PASS][16], [PASS][17], [PASS][18], [PASS][19], [PASS][20], [PASS][21], [PASS][22], [PASS][23], [PASS][24], [PASS][25], [PASS][26], [PASS][27], [PASS][28], [PASS][29], [PASS][30], [PASS][31]) ([i915#4392]) -> ([PASS][32], [PASS][33], [PASS][34], [PASS][35], [PASS][36], [PASS][37], [PASS][38], [PASS][39], [PASS][40], [PASS][41], [PASS][42], [PASS][43], [PASS][44], [PASS][45], [PASS][46], [PASS][47], [PASS][48], [PASS][49], [PASS][50], [PASS][51], [PASS][52], [PASS][53], [PASS][54], [PASS][55], [PASS][56])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk9/boot.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk9/boot.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk9/boot.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk9/boot.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk1/boot.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk1/boot.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk1/boot.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk2/boot.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk2/boot.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk2/boot.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk3/boot.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk3/boot.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk3/boot.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk5/boot.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk5/boot.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk5/boot.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk6/boot.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk6/boot.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk7/boot.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk7/boot.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk7/boot.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk8/boot.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk8/boot.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk8/boot.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk8/boot.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk9/boot.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk1/boot.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk1/boot.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk1/boot.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/boot.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/boot.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/boot.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk3/boot.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk3/boot.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk3/boot.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk5/boot.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk5/boot.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk5/boot.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk6/boot.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk6/boot.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk6/boot.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk7/boot.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk7/boot.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk7/boot.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk7/boot.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk8/boot.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk8/boot.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk8/boot.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk9/boot.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk9/boot.html

  

### IGT changes ###

#### Issues hit ####

  * igt@gem_create@create-massive:
    - shard-skl:          NOTRUN -> [DMESG-WARN][57] ([i915#4991])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@gem_create@create-massive.html

  * igt@gem_exec_balancer@parallel-keep-in-fence:
    - shard-iclb:         [PASS][58] -> [SKIP][59] ([i915#4525]) +1 similar issue
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb1/igt@gem_exec_balancer@parallel-keep-in-fence.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb5/igt@gem_exec_balancer@parallel-keep-in-fence.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-skl:          NOTRUN -> [FAIL][60] ([i915#2846])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-apl:          NOTRUN -> [FAIL][61] ([i915#2842])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
    - shard-glk:          NOTRUN -> [FAIL][62] ([i915#2842])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@gem_exec_fair@basic-none@rcs0.html

  * igt@gem_lmem_swapping@basic:
    - shard-skl:          NOTRUN -> [SKIP][63] ([fdo#109271] / [i915#4613]) +3 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl10/igt@gem_lmem_swapping@basic.html
    - shard-glk:          NOTRUN -> [SKIP][64] ([fdo#109271] / [i915#4613])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@gem_lmem_swapping@basic.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-skl:          NOTRUN -> [INCOMPLETE][65] ([i915#7248])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl9/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-skl:          NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#3323])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@i915_pipe_stress@stress-xrgb8888-untiled:
    - shard-skl:          NOTRUN -> [FAIL][67] ([i915#7036])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@i915_pipe_stress@stress-xrgb8888-untiled.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-skl:          NOTRUN -> [FAIL][68] ([i915#3989] / [i915#454])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_selftest@live@gt_heartbeat:
    - shard-apl:          [PASS][69] -> [DMESG-FAIL][70] ([i915#5334])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl2/igt@i915_selftest@live@gt_heartbeat.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl2/igt@i915_selftest@live@gt_heartbeat.html

  * igt@kms_async_flips@alternate-sync-async-flip@pipe-a-edp-1:
    - shard-skl:          NOTRUN -> [FAIL][71] ([i915#2521]) +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@kms_async_flips@alternate-sync-async-flip@pipe-a-edp-1.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][72] ([i915#3763])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl10/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs:
    - shard-glk:          NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#3886]) +1 similar issue
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-c-random-ccs-data-y_tiled_gen12_mc_ccs:
    - shard-skl:          NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#3886]) +14 similar issues
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl4/igt@kms_ccs@pipe-c-random-ccs-data-y_tiled_gen12_mc_ccs.html

  * igt@kms_chamelium@dp-edid-change-during-suspend:
    - shard-glk:          NOTRUN -> [SKIP][75] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@kms_chamelium@dp-edid-change-during-suspend.html
    - shard-skl:          NOTRUN -> [SKIP][76] ([fdo#109271] / [fdo#111827]) +9 similar issues
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl10/igt@kms_chamelium@dp-edid-change-during-suspend.html

  * igt@kms_chamelium@dp-hpd-after-suspend:
    - shard-apl:          NOTRUN -> [SKIP][77] ([fdo#109271] / [fdo#111827])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@kms_chamelium@dp-hpd-after-suspend.html

  * igt@kms_color_chamelium@ctm-negative:
    - shard-snb:          NOTRUN -> [SKIP][78] ([fdo#109271] / [fdo#111827]) +1 similar issue
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-snb2/igt@kms_color_chamelium@ctm-negative.html

  * igt@kms_cursor_crc@cursor-suspend@pipe-b-dp-1:
    - shard-apl:          [PASS][79] -> [DMESG-WARN][80] ([i915#180])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl2/igt@kms_cursor_crc@cursor-suspend@pipe-b-dp-1.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@kms_cursor_crc@cursor-suspend@pipe-b-dp-1.html

  * igt@kms_cursor_legacy@flip-vs-cursor@atomic-transitions:
    - shard-glk:          [PASS][81] -> [FAIL][82] ([i915#2346])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk5/igt@kms_cursor_legacy@flip-vs-cursor@atomic-transitions.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk6/igt@kms_cursor_legacy@flip-vs-cursor@atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor@varying-size:
    - shard-iclb:         [PASS][83] -> [FAIL][84] ([i915#2346]) +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb2/igt@kms_cursor_legacy@flip-vs-cursor@varying-size.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor@varying-size.html

  * igt@kms_dsc@dsc-with-bpc-formats:
    - shard-glk:          NOTRUN -> [SKIP][85] ([fdo#109271] / [i915#7205])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@kms_dsc@dsc-with-bpc-formats.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ac-hdmi-a1-hdmi-a2:
    - shard-glk:          [PASS][86] -> [FAIL][87] ([i915#79]) +1 similar issue
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk1/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ac-hdmi-a1-hdmi-a2.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk3/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ac-hdmi-a1-hdmi-a2.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-downscaling@pipe-a-default-mode:
    - shard-iclb:         NOTRUN -> [SKIP][88] ([i915#2672]) +3 similar issues
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb2/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode:
    - shard-iclb:         NOTRUN -> [SKIP][89] ([i915#3555])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb2/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-iclb:         NOTRUN -> [SKIP][90] ([i915#2587] / [i915#2672]) +4 similar issues
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb5/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling@pipe-a-default-mode:
    - shard-iclb:         NOTRUN -> [SKIP][91] ([i915#2672] / [i915#3555])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling@pipe-a-default-mode.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-gtt:
    - shard-skl:          NOTRUN -> [SKIP][92] ([fdo#109271]) +246 similar issues
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl9/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt:
    - shard-glk:          NOTRUN -> [SKIP][93] ([fdo#109271]) +34 similar issues
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-cpu:
    - shard-snb:          NOTRUN -> [SKIP][94] ([fdo#109271]) +17 similar issues
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-snb2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-suspend:
    - shard-skl:          NOTRUN -> [INCOMPLETE][95] ([i915#7255])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@kms_frontbuffer_tracking@psr-suspend.html

  * igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-b-edp-1:
    - shard-skl:          NOTRUN -> [FAIL][96] ([i915#4573]) +2 similar issues
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-b-edp-1.html

  * igt@kms_plane_scaling@invalid-num-scalers@pipe-a-edp-1-invalid-num-scalers:
    - shard-skl:          NOTRUN -> [SKIP][97] ([fdo#109271] / [i915#5776]) +2 similar issues
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@kms_plane_scaling@invalid-num-scalers@pipe-a-edp-1-invalid-num-scalers.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area:
    - shard-glk:          NOTRUN -> [SKIP][98] ([fdo#109271] / [i915#658]) +1 similar issue
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb:
    - shard-skl:          NOTRUN -> [SKIP][99] ([fdo#109271] / [i915#658]) +4 similar issues
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb.html

  * igt@kms_psr2_su@page_flip-p010@pipe-b-edp-1:
    - shard-iclb:         NOTRUN -> [FAIL][100] ([i915#5939]) +2 similar issues
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb2/igt@kms_psr2_su@page_flip-p010@pipe-b-edp-1.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [PASS][101] -> [SKIP][102] ([fdo#109441]) +3 similar issues
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb2/igt@kms_psr@psr2_no_drrs.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb7/igt@kms_psr@psr2_no_drrs.html

  * igt@kms_vblank@pipe-d-ts-continuation-dpms-rpm:
    - shard-apl:          NOTRUN -> [SKIP][103] ([fdo#109271]) +38 similar issues
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@kms_vblank@pipe-d-ts-continuation-dpms-rpm.html

  * igt@kms_writeback@writeback-fb-id:
    - shard-skl:          NOTRUN -> [SKIP][104] ([fdo#109271] / [i915#2437]) +1 similar issue
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@kms_writeback@writeback-fb-id.html

  * igt@sysfs_clients@create:
    - shard-glk:          NOTRUN -> [SKIP][105] ([fdo#109271] / [i915#2994]) +1 similar issue
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@sysfs_clients@create.html

  * igt@sysfs_clients@pidname:
    - shard-apl:          NOTRUN -> [SKIP][106] ([fdo#109271] / [i915#2994])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl6/igt@sysfs_clients@pidname.html

  * igt@sysfs_clients@split-50:
    - shard-skl:          NOTRUN -> [SKIP][107] ([fdo#109271] / [i915#2994]) +2 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-skl6/igt@sysfs_clients@split-50.html

  
#### Possible fixes ####

  * igt@gem_exec_balancer@parallel-bb-first:
    - shard-iclb:         [SKIP][108] ([i915#4525]) -> [PASS][109] +2 similar issues
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb6/igt@gem_exec_balancer@parallel-bb-first.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb2/igt@gem_exec_balancer@parallel-bb-first.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [FAIL][110] ([i915#2842]) -> [PASS][111] +1 similar issue
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk8/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk2/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@i915_module_load@reload-no-display:
    - shard-snb:          [DMESG-WARN][112] ([i915#4528]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-snb4/igt@i915_module_load@reload-no-display.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-snb2/igt@i915_module_load@reload-no-display.html

  * igt@kms_cursor_legacy@flip-vs-cursor@atomic-transitions-varying-size:
    - shard-glk:          [FAIL][114] ([i915#2346]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk5/igt@kms_cursor_legacy@flip-vs-cursor@atomic-transitions-varying-size.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk6/igt@kms_cursor_legacy@flip-vs-cursor@atomic-transitions-varying-size.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1:
    - shard-iclb:         [FAIL][116] ([i915#79]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb7/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb6/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1:
    - shard-glk:          [FAIL][118] ([i915#2122]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk1/igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk3/igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
    - shard-apl:          [DMESG-WARN][120] ([i915#180]) -> [PASS][121] +1 similar issue
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl3/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html

  * igt@kms_plane_multiple@tiling-none@pipe-a-edp-1:
    - shard-iclb:         [DMESG-WARN][122] ([i915#4391]) -> [PASS][123]
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb7/igt@kms_plane_multiple@tiling-none@pipe-a-edp-1.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb6/igt@kms_plane_multiple@tiling-none@pipe-a-edp-1.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b-edp-1:
    - shard-iclb:         [SKIP][124] ([i915#5235]) -> [PASS][125] +2 similar issues
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb2/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b-edp-1.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb7/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b-edp-1.html

  * igt@kms_psr@psr2_dpms:
    - shard-iclb:         [SKIP][126] ([fdo#109441]) -> [PASS][127] +1 similar issue
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb1/igt@kms_psr@psr2_dpms.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb2/igt@kms_psr@psr2_dpms.html

  
#### Warnings ####

  * igt@gem_pread@exhaustion:
    - shard-apl:          [INCOMPLETE][128] ([i915#7248]) -> [WARN][129] ([i915#2658]) +1 similar issue
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl8/igt@gem_pread@exhaustion.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl1/igt@gem_pread@exhaustion.html
    - shard-glk:          [INCOMPLETE][130] ([i915#7248]) -> [WARN][131] ([i915#2658])
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-glk8/igt@gem_pread@exhaustion.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-glk9/igt@gem_pread@exhaustion.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-tglb:         [INCOMPLETE][132] ([i915#7248]) -> [WARN][133] ([i915#2658])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-tglb7/igt@gem_pwrite@basic-exhaustion.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-tglb5/igt@gem_pwrite@basic-exhaustion.html

  * igt@i915_pm_rc6_residency@rc6-idle@rcs0:
    - shard-iclb:         [WARN][134] ([i915#2684]) -> [FAIL][135] ([i915#2684])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb1/igt@i915_pm_rc6_residency@rc6-idle@rcs0.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb5/igt@i915_pm_rc6_residency@rc6-idle@rcs0.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area:
    - shard-iclb:         [SKIP][136] ([i915#2920]) -> [SKIP][137] ([fdo#111068] / [i915#658])
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-iclb7/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html

  * igt@runner@aborted:
    - shard-apl:          ([FAIL][138], [FAIL][139], [FAIL][140], [FAIL][141]) ([i915#180] / [i915#3002] / [i915#4312]) -> ([FAIL][142], [FAIL][143], [FAIL][144]) ([i915#3002] / [i915#4312])
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl8/igt@runner@aborted.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl7/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl3/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12346/shard-apl2/igt@runner@aborted.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl7/igt@runner@aborted.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl8/igt@runner@aborted.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/shard-apl3/igt@runner@aborted.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2521]: https://gitlab.freedesktop.org/drm/intel/issues/2521
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2658]: https://gitlab.freedesktop.org/drm/intel/issues/2658
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2684]: https://gitlab.freedesktop.org/drm/intel/issues/2684
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/intel/issues/2846
  [i915#2920]: https://gitlab.freedesktop.org/drm/intel/issues/2920
  [i915#2994]: https://gitlab.freedesktop.org/drm/intel/issues/2994
  [i915#3002]: https://gitlab.freedesktop.org/drm/intel/issues/3002
  [i915#3323]: https://gitlab.freedesktop.org/drm/intel/issues/3323
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3763]: https://gitlab.freedesktop.org/drm/intel/issues/3763
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#3989]: https://gitlab.freedesktop.org/drm/intel/issues/3989
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4392]: https://gitlab.freedesktop.org/drm/intel/issues/4392
  [i915#4525]: https://gitlab.freedesktop.org/drm/intel/issues/4525
  [i915#4528]: https://gitlab.freedesktop.org/drm/intel/issues/4528
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#4573]: https://gitlab.freedesktop.org/drm/intel/issues/4573
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4991]: https://gitlab.freedesktop.org/drm/intel/issues/4991
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5334]: https://gitlab.freedesktop.org/drm/intel/issues/5334
  [i915#5776]: https://gitlab.freedesktop.org/drm/intel/issues/5776
  [i915#5939]: https://gitlab.freedesktop.org/drm/intel/issues/5939
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#7036]: https://gitlab.freedesktop.org/drm/intel/issues/7036
  [i915#7205]: https://gitlab.freedesktop.org/drm/intel/issues/7205
  [i915#7248]: https://gitlab.freedesktop.org/drm/intel/issues/7248
  [i915#7255]: https://gitlab.freedesktop.org/drm/intel/issues/7255
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79


Build changes
-------------

  * Linux: CI_DRM_12346 -> Patchwork_110557v1

  CI-20190529: 20190529
  CI_DRM_12346: 7b32ba9462baa932abf6cbe2f1a8ecb79e922a6e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7044: dbeb6f92720292f8303182a0e649284cea5b11a6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_110557v1: 7b32ba9462baa932abf6cbe2f1a8ecb79e922a6e @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110557v1/index.html

[-- Attachment #2: Type: text/html, Size: 35800 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32 Umesh Nerlige Ramappa
@ 2022-11-07 10:13   ` Tvrtko Ursulin
  2022-11-07 21:23     ` Dixit, Ashutosh
  0 siblings, 1 reply; 14+ messages in thread
From: Tvrtko Ursulin @ 2022-11-07 10:13 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa, intel-gfx


On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
> PMU reads the GT timestamp as a 2x32 mmio read and since upper and lower
> 32 bit registers are read in a loop, there is a latency involved between
> getting the GT timestamp and the CPU timestamp. As part of the
> resolution, refactor intel_uncore_read64_2x32 to acquire forcewake and
> uncore lock prior to reading upper and lower regs.
> 
> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> ---
>   drivers/gpu/drm/i915/intel_uncore.h | 44 ++++++++++++++++++++---------
>   1 file changed, 30 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
> index 5449146a0624..e9e38490815d 100644
> --- a/drivers/gpu/drm/i915/intel_uncore.h
> +++ b/drivers/gpu/drm/i915/intel_uncore.h
> @@ -382,20 +382,6 @@ __uncore_write(write_notrace, 32, l, false)
>    */
>   __uncore_read(read64, 64, q, true)
>   
> -static inline u64
> -intel_uncore_read64_2x32(struct intel_uncore *uncore,
> -			 i915_reg_t lower_reg, i915_reg_t upper_reg)
> -{
> -	u32 upper, lower, old_upper, loop = 0;
> -	upper = intel_uncore_read(uncore, upper_reg);
> -	do {
> -		old_upper = upper;
> -		lower = intel_uncore_read(uncore, lower_reg);
> -		upper = intel_uncore_read(uncore, upper_reg);
> -	} while (upper != old_upper && loop++ < 2);
> -	return (u64)upper << 32 | lower;
> -}
> -
>   #define intel_uncore_posting_read(...) ((void)intel_uncore_read_notrace(__VA_ARGS__))
>   #define intel_uncore_posting_read16(...) ((void)intel_uncore_read16_notrace(__VA_ARGS__))
>   
> @@ -455,6 +441,36 @@ static inline void intel_uncore_rmw_fw(struct intel_uncore *uncore,
>   		intel_uncore_write_fw(uncore, reg, val);
>   }
>   
> +static inline u64
> +intel_uncore_read64_2x32(struct intel_uncore *uncore,
> +			 i915_reg_t lower_reg, i915_reg_t upper_reg)
> +{
> +	u32 upper, lower, old_upper, loop = 0;
> +	enum forcewake_domains fw_domains;
> +	unsigned long flags;
> +
> +	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
> +						    FW_REG_READ);
> +
> +	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
> +						    FW_REG_READ);
> +
> +	spin_lock_irqsave(&uncore->lock, flags);
> +	intel_uncore_forcewake_get__locked(uncore, fw_domains);
> +
> +	upper = intel_uncore_read_fw(uncore, upper_reg);
> +	do {
> +		old_upper = upper;
> +		lower = intel_uncore_read_fw(uncore, lower_reg);
> +		upper = intel_uncore_read_fw(uncore, upper_reg);
> +	} while (upper != old_upper && loop++ < 2);
> +
> +	intel_uncore_forcewake_put__locked(uncore, fw_domains);

I mulled over the fact that this no longer applies the put hysteresis,
but then I saw that GuC busyness is essentially the only current caller,
so I thought it doesn't really warrant adding a super long named
intel_uncore_forcewake_put_delayed__locked helper.

Perhaps it would make sense to move this out of static inline, in which
case it would also be easier to keep the hysteresis without needing to
export any new helpers, but mostly because the static inline does not
feel justified. It sounds like an attractive option, but the patch is
passable as is.
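
Roughly, that option would look something like the below (a sketch only,
not something this series has to do):

/* intel_uncore.h keeps just the declaration ... */
u64 intel_uncore_read64_2x32(struct intel_uncore *uncore,
			     i915_reg_t lower_reg, i915_reg_t upper_reg);

/* ... and the body moves to intel_uncore.c, where the internal forcewake
 * machinery (including the put hysteresis) is reachable without exporting
 * any new helpers.
 */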

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

> +	spin_unlock_irqrestore(&uncore->lock, flags);
> +
> +	return (u64)upper << 32 | lower;
> +}
> +
>   static inline int intel_uncore_write_and_verify(struct intel_uncore *uncore,
>   						i915_reg_t reg, u32 val,
>   						u32 mask, u32 expected_val)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest Umesh Nerlige Ramappa
@ 2022-11-07 10:16   ` Tvrtko Ursulin
  2022-11-07 19:01     ` Umesh Nerlige Ramappa
  2022-11-07 23:33   ` Dixit, Ashutosh
  1 sibling, 1 reply; 14+ messages in thread
From: Tvrtko Ursulin @ 2022-11-07 10:16 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa, intel-gfx


On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
> Engine busyness samples around a 10ms period is failing with busyness
> ranging approx. from 87% to 115%. The expected range is +/- 5% of the
> sample period.
> 
> When determining busyness of active engine, the GuC based engine
> busyness implementation relies on a 64 bit timestamp register read. The
> latency incurred by this register read causes the failure.
> 
> On DG1, when the test fails, the observed latencies range from 900us -
> 1.5ms.

Is it at all faster with the locked 2x32, or can the same unexplained
display related latencies still happen?

> One solution tried was to reduce the latency between reg read and
> CPU timestamp capture, but such optimization does not add value to user
> since the CPU timestamp obtained here is only used for (1) selftest and
> (2) i915 rps implementation specific to execlist scheduler. Also, this
> solution only reduces the frequency of failure and does not eliminate
> it.
> 
> In order to make the selftest more robust and account for such
> latencies, increase the sample period to 100 ms.
> 
> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/selftest_engine_pm.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
> index 0dcb3ed44a73..87c94314cf67 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
> @@ -317,7 +317,7 @@ static int live_engine_busy_stats(void *arg)
>   		ENGINE_TRACE(engine, "measuring busy time\n");
>   		preempt_disable();
>   		de = intel_engine_get_busy_time(engine, &t[0]);
> -		mdelay(10);
> +		mdelay(100);
>   		de = ktime_sub(intel_engine_get_busy_time(engine, &t[1]), de);
>   		preempt_enable();
>   		dt = ktime_sub(t[1], t[0]);

Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest
  2022-11-07 10:16   ` Tvrtko Ursulin
@ 2022-11-07 19:01     ` Umesh Nerlige Ramappa
  0 siblings, 0 replies; 14+ messages in thread
From: Umesh Nerlige Ramappa @ 2022-11-07 19:01 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Mon, Nov 07, 2022 at 10:16:20AM +0000, Tvrtko Ursulin wrote:
>
>On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
>>Engine busyness samples around a 10ms period is failing with busyness
>>ranging approx. from 87% to 115%. The expected range is +/- 5% of the
>>sample period.
>>
>>When determining busyness of active engine, the GuC based engine
>>busyness implementation relies on a 64 bit timestamp register read. The
>>latency incurred by this register read causes the failure.
>>
>>On DG1, when the test fails, the observed latencies range from 900us -
>>1.5ms.
>
>Is it at all faster with the locked 2x32 or still the same unexplained 
>display related latencies can happen?

Considering that this originally failed 1 in 10 runs, the locked 2x32
patch in this series reduces the failure rate to 1 in 50.

What really helps is taking the CPU timestamp within the forcewake
block: the correlation between GPU/CPU times is then very good, and that
reduces the selftest failure frequency further (to 1 in 200). More like
this:

uncore_lock
fw_get
read 64-bit GPU time
read CPU timestamp
fw_put
uncore_unlock.
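
In rough code, something like the below (illustration only, not what this
series implements; the upper/lower rollover retry loop is omitted and the
register arguments are placeholders):

static u64 read_gt_and_cpu_ts(struct intel_uncore *uncore,
			      i915_reg_t lo, i915_reg_t hi,
			      enum forcewake_domains fw_domains,
			      ktime_t *cpu_ts)
{
	unsigned long flags;
	u64 gt_ts;

	spin_lock_irqsave(&uncore->lock, flags);		/* uncore_lock */
	intel_uncore_forcewake_get__locked(uncore, fw_domains);	/* fw_get */

	/* 64-bit GPU time */
	gt_ts = (u64)intel_uncore_read_fw(uncore, hi) << 32 |
		intel_uncore_read_fw(uncore, lo);

	/* CPU timestamp, taken while forcewake is still held */
	*cpu_ts = ktime_get();

	intel_uncore_forcewake_put__locked(uncore, fw_domains);	/* fw_put */
	spin_unlock_irqrestore(&uncore->lock, flags);		/* uncore_unlock */

	return gt_ts;
}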

I recall we had arrived at this sequence in the past when implementing 
query_cs_cycles 
- https://patchwork.freedesktop.org/patch/432041/?series=89766&rev=1

I still included the locked 2x32 patch here because 1 in 50 is still 
better than 1 in 10.

For now, the 100 ms sample period is the only promising solution I see.
No failures in 1000 runs.

Thanks,
Umesh

>
>>One solution tried was to reduce the latency between reg read and
>>CPU timestamp capture, but such optimization does not add value to user
>>since the CPU timestamp obtained here is only used for (1) selftest and
>>(2) i915 rps implementation specific to execlist scheduler. Also, this
>>solution only reduces the frequency of failure and does not eliminate
>>it.
>>
>>In order to make the selftest more robust and account for such
>>latencies, increase the sample period to 100 ms.
>>
>>Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
>>---
>>  drivers/gpu/drm/i915/gt/selftest_engine_pm.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>>diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
>>index 0dcb3ed44a73..87c94314cf67 100644
>>--- a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
>>+++ b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
>>@@ -317,7 +317,7 @@ static int live_engine_busy_stats(void *arg)
>>  		ENGINE_TRACE(engine, "measuring busy time\n");
>>  		preempt_disable();
>>  		de = intel_engine_get_busy_time(engine, &t[0]);
>>-		mdelay(10);
>>+		mdelay(100);
>>  		de = ktime_sub(intel_engine_get_busy_time(engine, &t[1]), de);
>>  		preempt_enable();
>>  		dt = ktime_sub(t[1], t[0]);
>
>Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
>Regards,
>
>Tvrtko

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  2022-11-07 10:13   ` Tvrtko Ursulin
@ 2022-11-07 21:23     ` Dixit, Ashutosh
  2022-11-08  0:11       ` Umesh Nerlige Ramappa
  0 siblings, 1 reply; 14+ messages in thread
From: Dixit, Ashutosh @ 2022-11-07 21:23 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Mon, 07 Nov 2022 02:13:46 -0800, Tvrtko Ursulin wrote:
>
> On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
> > PMU reads the GT timestamp as a 2x32 mmio read and since upper and lower
> > 32 bit registers are read in a loop, there is a latency involved between
> > getting the GT timestamp and the CPU timestamp. As part of the
> > resolution, refactor intel_uncore_read64_2x32 to acquire forcewake and
> > uncore lock prior to reading upper and lower regs.
> >
> > Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> > ---
> >   drivers/gpu/drm/i915/intel_uncore.h | 44 ++++++++++++++++++++---------
> >   1 file changed, 30 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
> > index 5449146a0624..e9e38490815d 100644
> > --- a/drivers/gpu/drm/i915/intel_uncore.h
> > +++ b/drivers/gpu/drm/i915/intel_uncore.h
> > @@ -382,20 +382,6 @@ __uncore_write(write_notrace, 32, l, false)
> >    */
> >   __uncore_read(read64, 64, q, true)
> >   -static inline u64
> > -intel_uncore_read64_2x32(struct intel_uncore *uncore,
> > -			 i915_reg_t lower_reg, i915_reg_t upper_reg)
> > -{
> > -	u32 upper, lower, old_upper, loop = 0;
> > -	upper = intel_uncore_read(uncore, upper_reg);
> > -	do {
> > -		old_upper = upper;
> > -		lower = intel_uncore_read(uncore, lower_reg);
> > -		upper = intel_uncore_read(uncore, upper_reg);
> > -	} while (upper != old_upper && loop++ < 2);
> > -	return (u64)upper << 32 | lower;
> > -}
> > -
> >   #define intel_uncore_posting_read(...) ((void)intel_uncore_read_notrace(__VA_ARGS__))
> >   #define intel_uncore_posting_read16(...) ((void)intel_uncore_read16_notrace(__VA_ARGS__))
> >   @@ -455,6 +441,36 @@ static inline void intel_uncore_rmw_fw(struct
> > intel_uncore *uncore,
> >		intel_uncore_write_fw(uncore, reg, val);
> >   }
> >   +static inline u64
> > +intel_uncore_read64_2x32(struct intel_uncore *uncore,
> > +			 i915_reg_t lower_reg, i915_reg_t upper_reg)
> > +{
> > +	u32 upper, lower, old_upper, loop = 0;
> > +	enum forcewake_domains fw_domains;
> > +	unsigned long flags;
> > +
> > +	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
> > +						    FW_REG_READ);
> > +
> > +	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
> > +						    FW_REG_READ);
> > +
> > +	spin_lock_irqsave(&uncore->lock, flags);
> > +	intel_uncore_forcewake_get__locked(uncore, fw_domains);
> > +
> > +	upper = intel_uncore_read_fw(uncore, upper_reg);
> > +	do {
> > +		old_upper = upper;
> > +		lower = intel_uncore_read_fw(uncore, lower_reg);
> > +		upper = intel_uncore_read_fw(uncore, upper_reg);
> > +	} while (upper != old_upper && loop++ < 2);
> > +
> > +	intel_uncore_forcewake_put__locked(uncore, fw_domains);
>
> I mulled over the fact this no longer applies the put hysteresis, but then
> I saw GuC busyness is essentially the only current caller so thought it
> doesn't really warrant adding a super long named
> intel_uncore_forcewake_put_delayed__locked helper.
>
> Perhaps it would make sense to move this out of static inline, in which
> case it would also be easier to have the hysteresis without needing to
> export any new helpers, but mostly because it does not feel the static
> inline is justified. Sounds an attractive option but it is passable as is.

Yup, copy that. I also see now how this reduces the read latency. It
would also increase the latency a bit for a different thread trying to
do an uncore read/write, since we hold uncore->lock longer, but that
should be ok I think.

> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Copy that too:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

>
> > +	spin_unlock_irqrestore(&uncore->lock, flags);
> > +
> > +	return (u64)upper << 32 | lower;
> > +}
> > +
> >   static inline int intel_uncore_write_and_verify(struct intel_uncore *uncore,
> >						i915_reg_t reg, u32 val,
> >						u32 mask, u32 expected_val)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest
  2022-11-05  0:32 ` [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest Umesh Nerlige Ramappa
  2022-11-07 10:16   ` Tvrtko Ursulin
@ 2022-11-07 23:33   ` Dixit, Ashutosh
  1 sibling, 0 replies; 14+ messages in thread
From: Dixit, Ashutosh @ 2022-11-07 23:33 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa; +Cc: intel-gfx

On Fri, 04 Nov 2022 17:32:35 -0700, Umesh Nerlige Ramappa wrote:
>
> Engine busyness samples around a 10ms period is failing with busyness
> ranging approx. from 87% to 115%. The expected range is +/- 5% of the
> sample period.
>
> When determining busyness of active engine, the GuC based engine
> busyness implementation relies on a 64 bit timestamp register read. The
> latency incurred by this register read causes the failure.
>
> On DG1, when the test fails, the observed latencies range from 900us -
> 1.5ms.
>
> One solution tried was to reduce the latency between reg read and
> CPU timestamp capture, but such optimization does not add value to user
> since the CPU timestamp obtained here is only used for (1) selftest and
> (2) i915 rps implementation specific to execlist scheduler. Also, this
> solution only reduces the frequency of failure and does not eliminate
> it.
>
> In order to make the selftest more robust and account for such
> latencies, increase the sample period to 100 ms.

Hi Umesh,

I think it would be good to add to the commit message:

* Gitlab bug number if any
* Paste of the actual dmesg error in the commit message
* Also adapt the above commit message to the fact that we've now added the
  optimized 64 bit read

With that this is:

Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

If you want me to review the new commit message I can do that too.

Thanks.
--
Ashutosh


>
> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/selftest_engine_pm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
> index 0dcb3ed44a73..87c94314cf67 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
> @@ -317,7 +317,7 @@ static int live_engine_busy_stats(void *arg)
>		ENGINE_TRACE(engine, "measuring busy time\n");
>		preempt_disable();
>		de = intel_engine_get_busy_time(engine, &t[0]);
> -		mdelay(10);
> +		mdelay(100);
>		de = ktime_sub(intel_engine_get_busy_time(engine, &t[1]), de);
>		preempt_enable();
>		dt = ktime_sub(t[1], t[0]);
> --
> 2.36.1
>
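
As a rough error budget, using only the figures quoted above (900 us - 1.5 ms
worst-case read latency on DG1, +/- 5% tolerance) and assuming the register
read latency is the dominant error source:

	1.5 ms / 10 ms sample  -> up to ~15% apparent busyness error,
	                          consistent with the observed 87%..115% range
	1.5 ms / 100 ms sample -> up to ~1.5% error, comfortably inside +/- 5%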

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  2022-11-07 21:23     ` Dixit, Ashutosh
@ 2022-11-08  0:11       ` Umesh Nerlige Ramappa
  2022-11-08  0:45         ` Dixit, Ashutosh
  0 siblings, 1 reply; 14+ messages in thread
From: Umesh Nerlige Ramappa @ 2022-11-08  0:11 UTC (permalink / raw)
  To: Dixit, Ashutosh; +Cc: intel-gfx

On Mon, Nov 07, 2022 at 01:23:19PM -0800, Dixit, Ashutosh wrote:
>On Mon, 07 Nov 2022 02:13:46 -0800, Tvrtko Ursulin wrote:
>>
>> On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
>> > PMU reads the GT timestamp as a 2x32 mmio read and since upper and lower
>> > 32 bit registers are read in a loop, there is a latency involved between
>> > getting the GT timestamp and the CPU timestamp. As part of the
>> > resolution, refactor intel_uncore_read64_2x32 to acquire forcewake and
>> > uncore lock prior to reading upper and lower regs.
>> >
>> > Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
>> > ---
>> >   drivers/gpu/drm/i915/intel_uncore.h | 44 ++++++++++++++++++++---------
>> >   1 file changed, 30 insertions(+), 14 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
>> > index 5449146a0624..e9e38490815d 100644
>> > --- a/drivers/gpu/drm/i915/intel_uncore.h
>> > +++ b/drivers/gpu/drm/i915/intel_uncore.h
>> > @@ -382,20 +382,6 @@ __uncore_write(write_notrace, 32, l, false)
>> >    */
>> >   __uncore_read(read64, 64, q, true)
>> >   -static inline u64
>> > -intel_uncore_read64_2x32(struct intel_uncore *uncore,
>> > -			 i915_reg_t lower_reg, i915_reg_t upper_reg)
>> > -{
>> > -	u32 upper, lower, old_upper, loop = 0;
>> > -	upper = intel_uncore_read(uncore, upper_reg);
>> > -	do {
>> > -		old_upper = upper;
>> > -		lower = intel_uncore_read(uncore, lower_reg);
>> > -		upper = intel_uncore_read(uncore, upper_reg);
>> > -	} while (upper != old_upper && loop++ < 2);
>> > -	return (u64)upper << 32 | lower;
>> > -}
>> > -
>> >   #define intel_uncore_posting_read(...) ((void)intel_uncore_read_notrace(__VA_ARGS__))
>> >   #define intel_uncore_posting_read16(...) ((void)intel_uncore_read16_notrace(__VA_ARGS__))
>> > @@ -455,6 +441,36 @@ static inline void intel_uncore_rmw_fw(struct intel_uncore *uncore,
>> >		intel_uncore_write_fw(uncore, reg, val);
>> >   }
>> >   +static inline u64
>> > +intel_uncore_read64_2x32(struct intel_uncore *uncore,
>> > +			 i915_reg_t lower_reg, i915_reg_t upper_reg)
>> > +{
>> > +	u32 upper, lower, old_upper, loop = 0;
>> > +	enum forcewake_domains fw_domains;
>> > +	unsigned long flags;
>> > +
>> > +	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
>> > +						    FW_REG_READ);
>> > +
>> > +	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
>> > +						    FW_REG_READ);
>> > +
>> > +	spin_lock_irqsave(&uncore->lock, flags);
>> > +	intel_uncore_forcewake_get__locked(uncore, fw_domains);
>> > +
>> > +	upper = intel_uncore_read_fw(uncore, upper_reg);
>> > +	do {
>> > +		old_upper = upper;
>> > +		lower = intel_uncore_read_fw(uncore, lower_reg);
>> > +		upper = intel_uncore_read_fw(uncore, upper_reg);
>> > +	} while (upper != old_upper && loop++ < 2);
>> > +
>> > +	intel_uncore_forcewake_put__locked(uncore, fw_domains);
>>
>> I mulled over the fact this no longer applies the put hysteresis, but then
>> I saw GuC busyness is essentially the only current caller so thought it
>> doesn't really warrant adding a super long named
>> intel_uncore_forcewake_put_delayed__locked helper.
>>
>> Perhaps it would make sense to move this out of static inline,

Are you saying - drop the inline OR drop static inline? I am assuming 
the former.

>> in which
>> case it would also be easier to have the hysteresis without needing to
>> export any new helpers,

I don't understand this part. Do you mean that it makes it easier to 
just call __intel_uncore_forcewake_put(uncore, fw_domains, true) then?  
Just wondering how 'static inline' has any effect on that.

>> but mostly because it does not feel the static
>> inline is justified.

Agree, just carried it over from the previous helper definition.

>> Sounds an attractive option but it is passable as is.
>
>Yup, copy that. I also see now how this reduces the read latency. It would
>also increase the latency a bit for a different thread trying to do an
>uncore read/write, since we hold uncore->lock longer, but that should be ok
>I think.

Didn't think about it from that perspective. Worst case is that 
gt_park/gt_unpark may happen very frequently (as seen on some use 
cases). In that case, the unpark would end up calling this helper each 
time.

>
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
>Copy that too:
>
>Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>

Thanks,
Umesh
>

>>
>> > +	spin_unlock_irqrestore(&uncore->lock, flags);
>> > +
>> > +	return (u64)upper << 32 | lower;
>> > +}
>> > +
>> >   static inline int intel_uncore_write_and_verify(struct intel_uncore *uncore,
>> >						i915_reg_t reg, u32 val,
>> >						u32 mask, u32 expected_val)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  2022-11-08  0:11       ` Umesh Nerlige Ramappa
@ 2022-11-08  0:45         ` Dixit, Ashutosh
  2022-11-08 10:06           ` Tvrtko Ursulin
  0 siblings, 1 reply; 14+ messages in thread
From: Dixit, Ashutosh @ 2022-11-08  0:45 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa; +Cc: intel-gfx

On Mon, 07 Nov 2022 16:11:27 -0800, Umesh Nerlige Ramappa wrote:
>
> On Mon, Nov 07, 2022 at 01:23:19PM -0800, Dixit, Ashutosh wrote:
> > On Mon, 07 Nov 2022 02:13:46 -0800, Tvrtko Ursulin wrote:
> >>
> >> On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
> >> > PMU reads the GT timestamp as a 2x32 mmio read and since upper and lower
> >> > 32 bit registers are read in a loop, there is a latency involved between
> >> > getting the GT timestamp and the CPU timestamp. As part of the
> >> > resolution, refactor intel_uncore_read64_2x32 to acquire forcewake and
> >> > uncore lock prior to reading upper and lower regs.
> >> >
> >> > Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> >> > ---
> >> >   drivers/gpu/drm/i915/intel_uncore.h | 44 ++++++++++++++++++++---------
> >> >   1 file changed, 30 insertions(+), 14 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
> >> > index 5449146a0624..e9e38490815d 100644
> >> > --- a/drivers/gpu/drm/i915/intel_uncore.h
> >> > +++ b/drivers/gpu/drm/i915/intel_uncore.h
> >> > @@ -382,20 +382,6 @@ __uncore_write(write_notrace, 32, l, false)
> >> >    */
> >> >   __uncore_read(read64, 64, q, true)
> >> >   -static inline u64
> >> > -intel_uncore_read64_2x32(struct intel_uncore *uncore,
> >> > -			 i915_reg_t lower_reg, i915_reg_t upper_reg)
> >> > -{
> >> > -	u32 upper, lower, old_upper, loop = 0;
> >> > -	upper = intel_uncore_read(uncore, upper_reg);
> >> > -	do {
> >> > -		old_upper = upper;
> >> > -		lower = intel_uncore_read(uncore, lower_reg);
> >> > -		upper = intel_uncore_read(uncore, upper_reg);
> >> > -	} while (upper != old_upper && loop++ < 2);
> >> > -	return (u64)upper << 32 | lower;
> >> > -}
> >> > -
> >> >   #define intel_uncore_posting_read(...) ((void)intel_uncore_read_notrace(__VA_ARGS__))
> >> >   #define intel_uncore_posting_read16(...) ((void)intel_uncore_read16_notrace(__VA_ARGS__))
> >> > @@ -455,6 +441,36 @@ static inline void intel_uncore_rmw_fw(struct intel_uncore *uncore,
> >> >		intel_uncore_write_fw(uncore, reg, val);
> >> >   }
> >> >   +static inline u64
> >> > +intel_uncore_read64_2x32(struct intel_uncore *uncore,
> >> > +			 i915_reg_t lower_reg, i915_reg_t upper_reg)
> >> > +{
> >> > +	u32 upper, lower, old_upper, loop = 0;
> >> > +	enum forcewake_domains fw_domains;
> >> > +	unsigned long flags;
> >> > +
> >> > +	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
> >> > +						    FW_REG_READ);
> >> > +
> >> > +	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
> >> > +						    FW_REG_READ);
> >> > +
> >> > +	spin_lock_irqsave(&uncore->lock, flags);
> >> > +	intel_uncore_forcewake_get__locked(uncore, fw_domains);
> >> > +
> >> > +	upper = intel_uncore_read_fw(uncore, upper_reg);
> >> > +	do {
> >> > +		old_upper = upper;
> >> > +		lower = intel_uncore_read_fw(uncore, lower_reg);
> >> > +		upper = intel_uncore_read_fw(uncore, upper_reg);
> >> > +	} while (upper != old_upper && loop++ < 2);
> >> > +
> >> > +	intel_uncore_forcewake_put__locked(uncore, fw_domains);
> >>
> >> I mulled over the fact this no longer applies the put hysteresis, but then
> >> I saw GuC busyness is essentially the only current caller so thought it
> >> doesn't really warrant adding a super long named
> >> intel_uncore_forcewake_put_delayed__locked helper.
> >>
> >> Perhaps it would make sense to move this out of static inline,
>
> Are you saying - drop the inline OR drop static inline? I am assuming the
> former.

No, you need to have 'static inline' for functions defined in a header
file. I also don't understand completely, but it seems what Tvrtko is saying
is to move the function to the .c, leaving only the declaration in the .h?
Anyway, let Tvrtko explain more.

>
> >> in which
> >> case it would also be easier to have the hysteresis without needing to
> >> export any new helpers,
>
> I don't understand this part. Do you mean that it makes it easier to just
> call __intel_uncore_forcewake_put(uncore, fw_domains, true) then?

Yes I think this will work, drop the lock and call
__intel_uncore_forcewake_put.

> Just
> wondering how 'static inline' has any effect on that.
>
> >> but mostly because it does not feel the static
> >> inline is justified.
>
> Agree, just carried it over from the previous helper definition.
>
> >> Sounds an attractive option but it is passable as is.
> >
> > Yup, copy that. I also see now how this reduces the read latency. It would
> > also increase the latency a bit for a different thread trying to do an
> > uncore read/write, since we hold uncore->lock longer, but that should be ok
> > I think.
>
> Didn't think about it from that perspective. Worst case is that
> gt_park/gt_unpark may happen very frequently (as seen on some use
> cases). In that case, the unpark would end up calling this helper each
> time.
>
> >
> >> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >
> > Copy that too:
> >
> > Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
>
> Thanks,
> Umesh
> >
>
> >>
> >> > +	spin_unlock_irqrestore(&uncore->lock, flags);
> >> > +
> >> > +	return (u64)upper << 32 | lower;
> >> > +}
> >> > +
> >> >   static inline int intel_uncore_write_and_verify(struct intel_uncore *uncore,
> >> >						i915_reg_t reg, u32 val,
> >> >						u32 mask, u32 expected_val)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32
  2022-11-08  0:45         ` Dixit, Ashutosh
@ 2022-11-08 10:06           ` Tvrtko Ursulin
  0 siblings, 0 replies; 14+ messages in thread
From: Tvrtko Ursulin @ 2022-11-08 10:06 UTC (permalink / raw)
  To: Dixit, Ashutosh, Umesh Nerlige Ramappa; +Cc: intel-gfx


On 08/11/2022 00:45, Dixit, Ashutosh wrote:
> On Mon, 07 Nov 2022 16:11:27 -0800, Umesh Nerlige Ramappa wrote:
>>
>> On Mon, Nov 07, 2022 at 01:23:19PM -0800, Dixit, Ashutosh wrote:
>>> On Mon, 07 Nov 2022 02:13:46 -0800, Tvrtko Ursulin wrote:
>>>>
>>>> On 05/11/2022 00:32, Umesh Nerlige Ramappa wrote:
>>>>> PMU reads the GT timestamp as a 2x32 mmio read and since upper and lower
>>>>> 32 bit registers are read in a loop, there is a latency involved between
>>>>> getting the GT timestamp and the CPU timestamp. As part of the
>>>>> resolution, refactor intel_uncore_read64_2x32 to acquire forcewake and
>>>>> uncore lock prior to reading upper and lower regs.
>>>>>
>>>>> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
>>>>> ---
>>>>>    drivers/gpu/drm/i915/intel_uncore.h | 44 ++++++++++++++++++++---------
>>>>>    1 file changed, 30 insertions(+), 14 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/intel_uncore.h b/drivers/gpu/drm/i915/intel_uncore.h
>>>>> index 5449146a0624..e9e38490815d 100644
>>>>> --- a/drivers/gpu/drm/i915/intel_uncore.h
>>>>> +++ b/drivers/gpu/drm/i915/intel_uncore.h
>>>>> @@ -382,20 +382,6 @@ __uncore_write(write_notrace, 32, l, false)
>>>>>     */
>>>>>    __uncore_read(read64, 64, q, true)
>>>>>    -static inline u64
>>>>> -intel_uncore_read64_2x32(struct intel_uncore *uncore,
>>>>> -			 i915_reg_t lower_reg, i915_reg_t upper_reg)
>>>>> -{
>>>>> -	u32 upper, lower, old_upper, loop = 0;
>>>>> -	upper = intel_uncore_read(uncore, upper_reg);
>>>>> -	do {
>>>>> -		old_upper = upper;
>>>>> -		lower = intel_uncore_read(uncore, lower_reg);
>>>>> -		upper = intel_uncore_read(uncore, upper_reg);
>>>>> -	} while (upper != old_upper && loop++ < 2);
>>>>> -	return (u64)upper << 32 | lower;
>>>>> -}
>>>>> -
>>>>>    #define intel_uncore_posting_read(...) ((void)intel_uncore_read_notrace(__VA_ARGS__))
>>>>>    #define intel_uncore_posting_read16(...) ((void)intel_uncore_read16_notrace(__VA_ARGS__))
>>>>> @@ -455,6 +441,36 @@ static inline void intel_uncore_rmw_fw(struct intel_uncore *uncore,
>>>>> 		intel_uncore_write_fw(uncore, reg, val);
>>>>>    }
>>>>>    +static inline u64
>>>>> +intel_uncore_read64_2x32(struct intel_uncore *uncore,
>>>>> +			 i915_reg_t lower_reg, i915_reg_t upper_reg)
>>>>> +{
>>>>> +	u32 upper, lower, old_upper, loop = 0;
>>>>> +	enum forcewake_domains fw_domains;
>>>>> +	unsigned long flags;
>>>>> +
>>>>> +	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
>>>>> +						    FW_REG_READ);
>>>>> +
>>>>> +	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
>>>>> +						    FW_REG_READ);
>>>>> +
>>>>> +	spin_lock_irqsave(&uncore->lock, flags);
>>>>> +	intel_uncore_forcewake_get__locked(uncore, fw_domains);
>>>>> +
>>>>> +	upper = intel_uncore_read_fw(uncore, upper_reg);
>>>>> +	do {
>>>>> +		old_upper = upper;
>>>>> +		lower = intel_uncore_read_fw(uncore, lower_reg);
>>>>> +		upper = intel_uncore_read_fw(uncore, upper_reg);
>>>>> +	} while (upper != old_upper && loop++ < 2);
>>>>> +
>>>>> +	intel_uncore_forcewake_put__locked(uncore, fw_domains);
>>>>
>>>> I mulled over the fact this no longer applies the put hysteresis, but then
>>>> I saw GuC busyness is essentially the only current caller so thought it
>>>> doesn't really warrant adding a super long named
>>>> intel_uncore_forcewake_put_delayed__locked helper.
>>>>
>>>> Perhaps it would make sense to move this out of static inline,
>>
>> Are you saying - drop the inline OR drop static inline? I am assuming the
>> former.
> 
> No, you need to have 'static inline' for functions defined in a header
> file. I also don't understand completely, but it seems what Tvrtko is saying
> is to move the function to the .c, leaving only the declaration in the .h?
> Anyway, let Tvrtko explain more.

Yes, it does not feel warranted for this to be a static inline, so I'd just
move it to .c. In which case..

>>>> in which
>>>> case it would also be easier to have the hysteresis without needing to
>>>> export any new helpers,
>>
>> I don't understand this part. Do you mean that it makes it easier to just
>> call __intel_uncore_forcewake_put(uncore, fw_domains, true) then?

.. you could indeed call this and keep the put hysteresis. But I don't
think it matters much, really. You can go with the patch as is, as far as I
am concerned.
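
For illustration, a minimal sketch of what that could look like with the
helper moved into intel_uncore.c, keeping the put hysteresis via the
delayed put (a sketch of the suggestion only, not the posted patch; it
assumes the three-argument __intel_uncore_forcewake_put() mentioned in this
thread is reachable from that file):

u64 intel_uncore_read64_2x32(struct intel_uncore *uncore,
			     i915_reg_t lower_reg, i915_reg_t upper_reg)
{
	u32 upper, lower, old_upper, loop = 0;
	enum forcewake_domains fw_domains;
	unsigned long flags;

	fw_domains = intel_uncore_forcewake_for_reg(uncore, lower_reg,
						    FW_REG_READ);
	fw_domains |= intel_uncore_forcewake_for_reg(uncore, upper_reg,
						     FW_REG_READ);

	spin_lock_irqsave(&uncore->lock, flags);
	intel_uncore_forcewake_get__locked(uncore, fw_domains);

	/* All dword reads happen under a single lock/forcewake acquisition */
	upper = intel_uncore_read_fw(uncore, upper_reg);
	do {
		old_upper = upper;
		lower = intel_uncore_read_fw(uncore, lower_reg);
		upper = intel_uncore_read_fw(uncore, upper_reg);
	} while (upper != old_upper && loop++ < 2);

	spin_unlock_irqrestore(&uncore->lock, flags);

	/*
	 * Drop the lock first, then do a delayed put so the usual release
	 * hysteresis is kept, without exporting any new helper.
	 */
	__intel_uncore_forcewake_put(uncore, fw_domains, true);

	return (u64)upper << 32 | lower;
}

The header would then carry only the declaration.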

> Yes I think this will work, drop the lock and call
> __intel_uncore_forcewake_put.
> 
>> Just
>> wondering how 'static inline' has any effect on that.
>>
>>>> but mostly because it does not feel the static
>>>> inline is justified.
>>
>> Agree, just carried it over from the previous helper definition.
>>
>>>> Sounds an attractive option but it is passable as is.
>>>
>>> Yup, copy that. I also see now how this reduces the read latency. It would
>>> also increase the latency a bit for a different thread trying to do an
>>> uncore read/write, since we hold uncore->lock longer, but that should be ok
>>> I think.
>>
>> Didn't think about it from that perspective. Worst case is that
>> gt_park/gt_unpark may happen very frequently (as seen on some use
>> cases). In that case, the unpark would end up calling this helper each
>> time.

The concern is two mmio reads under the uncore lock versus two lock-unlock
cycles with one mmio read under each? Feels like a wash. I guess with this
DC-induced latency issue it's a worse worst case, but the difference
between normal times and the pathological spike is probably orders of
magnitude, right?
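
Spelling that trade-off out against the code quoted above (approximate, and
only meant to restate the comparison):

	/*
	 * Header version: each intel_uncore_read() took uncore->lock and
	 * forcewake internally, so the dword reads were separate short
	 * critical sections with room for other uncore traffic (and
	 * forcewake handling) in between them.
	 *
	 * New version: one lock + forcewake acquisition covers all the dword
	 * reads, so this caller is faster and the 64 bit value is sampled
	 * closer to the CPU timestamp, at the cost of other uncore users
	 * occasionally waiting on one slightly longer critical section.
	 */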

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread

Thread overview: 14+ messages
2022-11-05  0:32 [Intel-gfx] [PATCH 0/2] Fix live busy stats selftest failure Umesh Nerlige Ramappa
2022-11-05  0:32 ` [Intel-gfx] [PATCH 1/2] i915/uncore: Acquire fw before loop in intel_uncore_read64_2x32 Umesh Nerlige Ramappa
2022-11-07 10:13   ` Tvrtko Ursulin
2022-11-07 21:23     ` Dixit, Ashutosh
2022-11-08  0:11       ` Umesh Nerlige Ramappa
2022-11-08  0:45         ` Dixit, Ashutosh
2022-11-08 10:06           ` Tvrtko Ursulin
2022-11-05  0:32 ` [Intel-gfx] [PATCH 2/2] drm/i915/selftest: Bump up sample period for busy stats selftest Umesh Nerlige Ramappa
2022-11-07 10:16   ` Tvrtko Ursulin
2022-11-07 19:01     ` Umesh Nerlige Ramappa
2022-11-07 23:33   ` Dixit, Ashutosh
2022-11-05  0:57 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Fix live busy stats selftest failure Patchwork
2022-11-05  1:19 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-11-05 13:59 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
