* [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-04  2:04 ` Ville Syrjala
  0 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjala @ 2021-02-04  2:04 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, Rodrigo Vivi

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

drm_vblank_restore() exists because certain power saving states
can clobber the hardware frame counter. The way it does this is
by guesstimating how many frames were missed purely based on
the difference between the last stored timestamp vs. a newly
sampled timestamp.

If we should call this function before a full frame has
elapsed since we sampled the last timestamp we would end up
with a possibly slightly different timestamp value for the
same frame. Currently we will happily overwrite the already
stored timestamp for the frame with the new value. This
could cause userspace to observe two different timestamps
for the same frame (and the timestamp could even go
backwards depending on how much error we introduce when
correcting the timestamp based on the scanout position).

To avoid that let's not update the stored timestamp unless we're
also incrementing the sequence counter. We do still want to update
vblank->last with the freshly sampled hw frame counter value so
that subsequent vblank irqs/queries can actually use the hw frame
counter to determine how many frames have elapsed.

Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 893165eeddf3..e127a7db2088 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
 
 	vblank->last = last;
 
+	/*
+	 * drm_vblank_restore() wants to always update
+	 * vblank->last since we can't trust the frame counter
+	 * across power saving states. But we don't want to alter
+	 * the stored timestamp for the same frame number since
+	 * that would cause userspace to potentially observe two
+	 * different timestamps for the same frame.
+	 */
+	if (vblank_count_inc == 0)
+		return;
+
 	write_seqlock(&vblank->seqlock);
 	vblank->time = t_vblank;
 	atomic64_add(vblank_count_inc, &vblank->count);
-- 
2.26.2
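
For readers unfamiliar with the restore path, here is a rough sketch of what
the commit message describes: drm_vblank_restore() guesstimating the number
of missed frames from the timestamp delta before handing the result to
store_vblank(). This is a simplified paraphrase for illustration only, not
the actual drm_vblank.c code; the counter/timestamp sampling retry loop is
omitted and the helper names are assumed from context.

static void drm_vblank_restore_sketch(struct drm_device *dev, unsigned int pipe)
{
	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
	ktime_t t_vblank;
	u64 diff_ns;
	u32 cur_vblank, diff;

	/* sample the (possibly clobbered) hw frame counter and a fresh timestamp */
	cur_vblank = __get_vblank_counter(dev, pipe);
	drm_get_last_vbltimestamp(dev, pipe, &t_vblank, false);

	/*
	 * Guesstimate how many frames were missed purely from the difference
	 * between the stored timestamp and the fresh one (framedur_ns assumed
	 * valid here).
	 */
	diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
	diff = DIV_ROUND_CLOSEST_ULL(diff_ns, vblank->framedur_ns);

	/*
	 * With this patch, diff == 0 (less than a full frame since the stored
	 * timestamp) still refreshes vblank->last inside store_vblank() but
	 * leaves the stored timestamp and sequence count untouched.
	 */
	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
}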



* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
@ 2021-02-04  3:12 ` Patchwork
  -1 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-02-04  3:12 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx



== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice
URL   : https://patchwork.freedesktop.org/series/86672/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9727 -> Patchwork_19581
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/index.html

Known issues
------------

  Here are the changes found in Patchwork_19581 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@query-info:
    - fi-tgl-y:           NOTRUN -> [SKIP][1] ([fdo#109315] / [i915#2575])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-tgl-y/igt@amdgpu/amd_basic@query-info.html

  * igt@amdgpu/amd_cs_nop@sync-compute0:
    - fi-kbl-r:           NOTRUN -> [SKIP][2] ([fdo#109271]) +20 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-kbl-r/igt@amdgpu/amd_cs_nop@sync-compute0.html

  * igt@gem_huc_copy@huc-copy:
    - fi-kbl-r:           NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#2190])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-kbl-r/igt@gem_huc_copy@huc-copy.html

  * igt@kms_chamelium@hdmi-edid-read:
    - fi-kbl-r:           NOTRUN -> [SKIP][4] ([fdo#109271] / [fdo#111827]) +8 similar issues
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-kbl-r/igt@kms_chamelium@hdmi-edid-read.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - fi-kbl-r:           NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#533])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-kbl-r/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@prime_self_import@basic-with_two_bos:
    - fi-tgl-y:           [PASS][6] -> [DMESG-WARN][7] ([i915#402]) +1 similar issue
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/fi-tgl-y/igt@prime_self_import@basic-with_two_bos.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-tgl-y/igt@prime_self_import@basic-with_two_bos.html

  
#### Possible fixes ####

  * igt@fbdev@read:
    - fi-tgl-y:           [DMESG-WARN][8] ([i915#402]) -> [PASS][9] +2 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/fi-tgl-y/igt@fbdev@read.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-tgl-y/igt@fbdev@read.html

  * igt@gem_exec_suspend@basic-s3:
    - fi-tgl-y:           [DMESG-WARN][10] ([i915#2411] / [i915#402]) -> [PASS][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/fi-tgl-y/igt@gem_exec_suspend@basic-s3.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-tgl-y/igt@gem_exec_suspend@basic-s3.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [FAIL][12] ([i915#1372]) -> [PASS][13]
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2411]: https://gitlab.freedesktop.org/drm/intel/issues/2411
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533


Participating hosts (44 -> 39)
------------------------------

  Additional (1): fi-kbl-r 
  Missing    (6): fi-jsl-1 fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9727 -> Patchwork_19581

  CI-20190529: 20190529
  CI_DRM_9727: f707269365babf0b562f0f623ca36b37d7e0391a @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5989: 57a96840fd5aa7ec48c2f84b30e0420f84ec7386 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19581: 568e8428683fcad977ed3ab589c0c69de8abefe8 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

568e8428683f drm/vblank: Avoid storing a timestamp for the same frame twice

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/index.html


* [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
@ 2021-02-04  5:44 ` Patchwork
  -1 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-02-04  5:44 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx



== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice
URL   : https://patchwork.freedesktop.org/series/86672/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_9727_full -> Patchwork_19581_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_19581_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_19581_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19581_full:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-hsw:          [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-hsw7/igt@i915_module_load@reload-with-fault-injection.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-hsw8/igt@i915_module_load@reload-with-fault-injection.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@sysfs_clients@fair-7@vcs}:
    - shard-iclb:         [PASS][3] -> [FAIL][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb6/igt@sysfs_clients@fair-7@vcs.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb2/igt@sysfs_clients@fair-7@vcs.html

  

### Piglit changes ###

#### Possible regressions ####

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-bitxor-uint-uint (NEW):
    - {pig-icl-1065g7}:   NOTRUN -> [INCOMPLETE][5] +7 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/pig-icl-1065g7/spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-bitxor-uint-uint.html

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-tanh-vec4 (NEW):
    - {pig-icl-1065g7}:   NOTRUN -> [CRASH][6]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/pig-icl-1065g7/spec@arb_tessellation_shader@execution@built-in-functions@tcs-tanh-vec4.html

  
New tests
---------

  New tests have been introduced between CI_DRM_9727_full and Patchwork_19581_full:

### New Piglit tests (9) ###

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-mix-float-float-float:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-not-bvec2:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-assign-bitxor-int-int:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-bitand-neg-abs-int-ivec4:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-bitor-uvec2-uint:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-bitxor-uint-uint:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-rshift-uint-uint:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-op-selection-bool-uvec4-uvec4:
    - Statuses : 1 incomplete(s)
    - Exec time: [0.0] s

  * spec@arb_tessellation_shader@execution@built-in-functions@tcs-tanh-vec4:
    - Statuses : 1 crash(s)
    - Exec time: [0.50] s

  

Known issues
------------

  Here are the changes found in Patchwork_19581_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@drm_mm@all@replace:
    - shard-skl:          [PASS][7] -> [INCOMPLETE][8] ([i915#2485] / [i915#2813])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl9/igt@drm_mm@all@replace.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl3/igt@drm_mm@all@replace.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-iclb:         [PASS][9] -> [FAIL][10] ([i915#2842])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb8/igt@gem_exec_fair@basic-none-share@rcs0.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb6/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none-vip@rcs0:
    - shard-kbl:          [PASS][11] -> [FAIL][12] ([i915#2842]) +2 similar issues
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl6/igt@gem_exec_fair@basic-none-vip@rcs0.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl2/igt@gem_exec_fair@basic-none-vip@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
    - shard-glk:          [PASS][13] -> [FAIL][14] ([i915#2842]) +2 similar issues
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-glk3/igt@gem_exec_fair@basic-none@rcs0.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk4/igt@gem_exec_fair@basic-none@rcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-tglb:         [PASS][15] -> [FAIL][16] ([i915#2842])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb7/igt@gem_exec_fair@basic-pace@rcs0.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb5/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_schedule@u-fairslice@rcs0:
    - shard-apl:          [PASS][17] -> [DMESG-WARN][18] ([i915#1610])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-apl4/igt@gem_exec_schedule@u-fairslice@rcs0.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl3/igt@gem_exec_schedule@u-fairslice@rcs0.html
    - shard-glk:          [PASS][19] -> [DMESG-WARN][20] ([i915#1610] / [i915#2803])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-glk6/igt@gem_exec_schedule@u-fairslice@rcs0.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk4/igt@gem_exec_schedule@u-fairslice@rcs0.html

  * igt@gem_exec_schedule@u-fairslice@vcs1:
    - shard-tglb:         [PASS][21] -> [DMESG-WARN][22] ([i915#2803])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb6/igt@gem_exec_schedule@u-fairslice@vcs1.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb6/igt@gem_exec_schedule@u-fairslice@vcs1.html

  * igt@gem_exec_schedule@u-semaphore-codependency:
    - shard-skl:          [PASS][23] -> [DMESG-WARN][24] ([i915#1610])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl4/igt@gem_exec_schedule@u-semaphore-codependency.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl7/igt@gem_exec_schedule@u-semaphore-codependency.html

  * igt@gen9_exec_parse@basic-rejected:
    - shard-tglb:         NOTRUN -> [SKIP][25] ([fdo#112306])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb2/igt@gen9_exec_parse@basic-rejected.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [PASS][26] -> [FAIL][27] ([i915#454])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb5/igt@i915_pm_dc@dc6-psr.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb4/igt@i915_pm_dc@dc6-psr.html

  * igt@kms_chamelium@dp-frame-dump:
    - shard-kbl:          NOTRUN -> [SKIP][28] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_chamelium@dp-frame-dump.html

  * igt@kms_chamelium@hdmi-hpd-storm-disable:
    - shard-skl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [fdo#111827]) +3 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl4/igt@kms_chamelium@hdmi-hpd-storm-disable.html

  * igt@kms_color@pipe-b-ctm-0-75:
    - shard-skl:          [PASS][30] -> [DMESG-WARN][31] ([i915#1982])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl7/igt@kms_color@pipe-b-ctm-0-75.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl8/igt@kms_color@pipe-b-ctm-0-75.html

  * igt@kms_cursor_crc@pipe-b-cursor-128x128-random:
    - shard-skl:          [PASS][32] -> [FAIL][33] ([i915#54]) +4 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl2/igt@kms_cursor_crc@pipe-b-cursor-128x128-random.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl7/igt@kms_cursor_crc@pipe-b-cursor-128x128-random.html

  * igt@kms_cursor_edge_walk@pipe-b-256x256-left-edge:
    - shard-skl:          NOTRUN -> [SKIP][34] ([fdo#109271]) +42 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl4/igt@kms_cursor_edge_walk@pipe-b-256x256-left-edge.html

  * igt@kms_cursor_legacy@cursor-vs-flip-toggle:
    - shard-hsw:          [PASS][35] -> [FAIL][36] ([i915#2370]) +1 similar issue
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-hsw7/igt@kms_cursor_legacy@cursor-vs-flip-toggle.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-hsw1/igt@kms_cursor_legacy@cursor-vs-flip-toggle.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-tglb:         [PASS][37] -> [FAIL][38] ([i915#2346])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb8/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb7/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_draw_crc@draw-method-rgb565-mmap-cpu-untiled:
    - shard-snb:          [PASS][39] -> [SKIP][40] ([fdo#109271])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-snb2/igt@kms_draw_crc@draw-method-rgb565-mmap-cpu-untiled.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-snb6/igt@kms_draw_crc@draw-method-rgb565-mmap-cpu-untiled.html

  * igt@kms_flip@flip-vs-suspend@a-dp1:
    - shard-apl:          [PASS][41] -> [DMESG-WARN][42] ([i915#180]) +1 similar issue
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-apl8/igt@kms_flip@flip-vs-suspend@a-dp1.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl1/igt@kms_flip@flip-vs-suspend@a-dp1.html

  * igt@kms_flip@flip-vs-suspend@c-dp1:
    - shard-kbl:          [PASS][43] -> [DMESG-WARN][44] ([i915#180]) +6 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl6/igt@kms_flip@flip-vs-suspend@c-dp1.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_flip@flip-vs-suspend@c-dp1.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile:
    - shard-kbl:          NOTRUN -> [FAIL][45] ([i915#2641])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile.html

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-wc:
    - shard-kbl:          NOTRUN -> [SKIP][46] ([fdo#109271]) +30 similar issues
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-wc.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          NOTRUN -> [FAIL][47] ([i915#1188])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl1/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
    - shard-skl:          [PASS][48] -> [FAIL][49] ([fdo#108145] / [i915#265])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl10/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl9/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb:
    - shard-kbl:          NOTRUN -> [FAIL][50] ([fdo#108145] / [i915#265])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb.html

  * igt@kms_plane_multiple@atomic-pipe-d-tiling-yf:
    - shard-tglb:         NOTRUN -> [SKIP][51] ([fdo#112054])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb2/igt@kms_plane_multiple@atomic-pipe-d-tiling-yf.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2:
    - shard-kbl:          NOTRUN -> [SKIP][52] ([fdo#109271] / [i915#658])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1:
    - shard-skl:          NOTRUN -> [SKIP][53] ([fdo#109271] / [i915#658])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1.html

  * igt@kms_psr@psr2_cursor_mmap_cpu:
    - shard-iclb:         [PASS][54] -> [SKIP][55] ([fdo#109441]) +2 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_cpu.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb7/igt@kms_psr@psr2_cursor_mmap_cpu.html

  * igt@kms_setmode@invalid-clone-exclusive-crtc:
    - shard-skl:          NOTRUN -> [WARN][56] ([i915#2100])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl4/igt@kms_setmode@invalid-clone-exclusive-crtc.html

  * igt@kms_sysfs_edid_timing:
    - shard-kbl:          NOTRUN -> [FAIL][57] ([IGT#2])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@kms_sysfs_edid_timing.html

  * igt@perf@polling-parameterized:
    - shard-skl:          [PASS][58] -> [FAIL][59] ([i915#1542])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl1/igt@perf@polling-parameterized.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl8/igt@perf@polling-parameterized.html

  
#### Possible fixes ####

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-tglb:         [FAIL][60] ([i915#2842]) -> [PASS][61]
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb7/igt@gem_exec_fair@basic-pace@vecs0.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb5/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_schedule@u-fairslice@rcs0:
    - shard-skl:          [DMESG-WARN][62] ([i915#1610] / [i915#2803]) -> [PASS][63]
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl7/igt@gem_exec_schedule@u-fairslice@rcs0.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl4/igt@gem_exec_schedule@u-fairslice@rcs0.html

  * igt@i915_pm_rpm@system-suspend-modeset:
    - shard-kbl:          [INCOMPLETE][64] ([i915#151] / [i915#155]) -> [PASS][65]
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl1/igt@i915_pm_rpm@system-suspend-modeset.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@i915_pm_rpm@system-suspend-modeset.html

  * igt@i915_selftest@live@gt_heartbeat:
    - shard-iclb:         [DMESG-FAIL][66] ([i915#2291] / [i915#541]) -> [PASS][67]
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb8/igt@i915_selftest@live@gt_heartbeat.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb5/igt@i915_selftest@live@gt_heartbeat.html

  * igt@kms_cursor_crc@pipe-c-cursor-64x21-offscreen:
    - shard-skl:          [FAIL][68] ([i915#54]) -> [PASS][69] +5 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl4/igt@kms_cursor_crc@pipe-c-cursor-64x21-offscreen.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl7/igt@kms_cursor_crc@pipe-c-cursor-64x21-offscreen.html

  * igt@kms_flip@nonexisting-fb-interruptible@a-edp1:
    - shard-skl:          [DMESG-WARN][70] ([i915#1982]) -> [PASS][71]
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl3/igt@kms_flip@nonexisting-fb-interruptible@a-edp1.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl9/igt@kms_flip@nonexisting-fb-interruptible@a-edp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@b-edp1:
    - shard-skl:          [FAIL][72] ([i915#2122]) -> [PASS][73]
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl10/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-edp1.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl4/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-edp1.html

  * igt@kms_hdr@bpc-switch:
    - shard-skl:          [FAIL][74] ([i915#1188]) -> [PASS][75]
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl9/igt@kms_hdr@bpc-switch.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl3/igt@kms_hdr@bpc-switch.html

  * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min:
    - shard-skl:          [FAIL][76] ([fdo#108145] / [i915#265]) -> [PASS][77]
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl9/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl3/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html

  * igt@kms_psr@psr2_cursor_plane_onoff:
    - shard-iclb:         [SKIP][78] ([fdo#109441]) -> [PASS][79] +1 similar issue
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb7/igt@kms_psr@psr2_cursor_plane_onoff.html
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb2/igt@kms_psr@psr2_cursor_plane_onoff.html

  * {igt@sysfs_clients@busy@vcs0}:
    - shard-skl:          [FAIL][80] ([i915#3019]) -> [PASS][81]
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl6/igt@sysfs_clients@busy@vcs0.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl6/igt@sysfs_clients@busy@vcs0.html

  * {igt@sysfs_clients@recycle}:
    - shard-iclb:         [FAIL][82] ([i915#3028]) -> [PASS][83]
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb7/igt@sysfs_clients@recycle.html
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb2/igt@sysfs_clients@recycle.html

  * {igt@sysfs_clients@recycle-many}:
    - shard-iclb:         [FAIL][84] -> [PASS][85]
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb6/igt@sysfs_clients@recycle-many.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb1/igt@sysfs_clients@recycle-many.html
    - shard-tglb:         [FAIL][86] -> [PASS][87]
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb2/igt@sysfs_clients@recycle-many.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb5/igt@sysfs_clients@recycle-many.html

  * {igt@sysfs_clients@sema-10@vecs0}:
    - shard-glk:          [SKIP][88] ([fdo#109271] / [i915#3026]) -> [PASS][89]
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-glk8/igt@sysfs_clients@sema-10@vecs0.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk6/igt@sysfs_clients@sema-10@vecs0.html

  * {igt@sysfs_clients@split-25@vcs0}:
    - shard-skl:          [SKIP][90] ([fdo#109271]) -> [PASS][91]
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-skl9/igt@sysfs_clients@split-25@vcs0.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-skl10/igt@sysfs_clients@split-25@vcs0.html

  
#### Warnings ####

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-glk:          [FAIL][92] ([i915#2851]) -> [FAIL][93] ([i915#2842])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-glk1/igt@gem_exec_fair@basic-pace-solo@rcs0.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk2/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-iclb:         [FAIL][94] ([i915#2842]) -> [FAIL][95] ([i915#2849])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb7/igt@gem_exec_fair@basic-throttle@rcs0.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb1/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@i915_pm_rc6_residency@rc6-fence:
    - shard-iclb:         [WARN][96] ([i915#1804] / [i915#2684]) -> [WARN][97] ([i915#2681] / [i915#2684])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb6/igt@i915_pm_rc6_residency@rc6-fence.html
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb1/igt@i915_pm_rc6_residency@rc6-fence.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2:
    - shard-iclb:         [SKIP][98] ([i915#2920]) -> [SKIP][99] ([i915#658]) +1 similar issue
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2.html
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb7/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
    - shard-iclb:         [SKIP][100] ([i915#658]) -> [SKIP][101] ([i915#2920]) +3 similar issues
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-iclb6/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html

  * igt@runner@aborted:
    - shard-hsw:          ([FAIL][102], [FAIL][103]) ([i915#2295] / [i915#2505] / [i915#3002]) -> ([FAIL][104], [FAIL][105], [FAIL][106]) ([i915#142] / [i915#2292] / [i915#2295] / [i915#2505] / [i915#3002])
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-hsw8/igt@runner@aborted.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-hsw8/igt@runner@aborted.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-hsw8/igt@runner@aborted.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-hsw1/igt@runner@aborted.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-hsw8/igt@runner@aborted.html
    - shard-kbl:          ([FAIL][107], [FAIL][108], [FAIL][109], [FAIL][110]) ([i915#1814] / [i915#2295] / [i915#3002]) -> ([FAIL][111], [FAIL][112], [FAIL][113], [FAIL][114], [FAIL][115], [FAIL][116], [FAIL][117], [FAIL][118]) ([i915#1814] / [i915#2295] / [i915#3002] / [i915#602])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl7/igt@runner@aborted.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl7/igt@runner@aborted.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl4/igt@runner@aborted.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-kbl2/igt@runner@aborted.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@runner@aborted.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl4/igt@runner@aborted.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@runner@aborted.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@runner@aborted.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl1/igt@runner@aborted.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@runner@aborted.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl7/igt@runner@aborted.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-kbl4/igt@runner@aborted.html
    - shard-apl:          ([FAIL][119], [FAIL][120], [FAIL][121]) ([i915#2295] / [i915#3002]) -> ([FAIL][122], [FAIL][123], [FAIL][124], [FAIL][125], [FAIL][126], [FAIL][127]) ([i915#1610] / [i915#1814] / [i915#2295] / [i915#2426] / [i915#3002])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-apl3/igt@runner@aborted.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-apl7/igt@runner@aborted.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-apl4/igt@runner@aborted.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl1/igt@runner@aborted.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl8/igt@runner@aborted.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl3/igt@runner@aborted.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl8/igt@runner@aborted.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl1/igt@runner@aborted.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-apl3/igt@runner@aborted.html
    - shard-glk:          ([FAIL][128], [FAIL][129]) ([i915#2295] / [i915#3002] / [k.org#202321]) -> ([FAIL][130], [FAIL][131], [FAIL][132]) ([i915#2295] / [i915#2426] / [i915#3002] / [k.org#202321])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-glk4/igt@runner@aborted.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-glk7/igt@runner@aborted.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk4/igt@runner@aborted.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk7/igt@runner@aborted.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-glk4/igt@runner@aborted.html
    - shard-tglb:         ([FAIL][133], [FAIL][134], [FAIL][135]) ([i915#2295] / [i915#2667] / [i915#3002]) -> ([FAIL][136], [FAIL][137], [FAIL][138], [FAIL][139]) ([i915#2295] / [i915#2426] / [i915#2667] / [i915#2803] / [i915#3002])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb6/igt@runner@aborted.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb1/igt@runner@aborted.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9727/shard-tglb3/igt@runner@aborted.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb8/igt@runner@aborted.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb8/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb6/igt@runner@aborted.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/shard-tglb7/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [IGT#2]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/2
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112054]: https://bugs.freedesktop.org/show_bug.cgi?id=112054
  [fdo#112306]: https://bugs.freedesktop.org/show_bug.cgi?id=112306
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#142]: https://gitlab.freedesktop.org/drm/intel/issues/142
  [i915#151]: https://gitlab.freedesktop.org/drm/intel/issues/151
  [i915#1542]: https://gitlab.freedesktop.org/drm/intel/issues/1542
  [i915#155]: https://gitlab.freedesktop.org/drm/intel/issues/155
  [i915#1610]: https://gitlab.freedesktop.org/drm/intel/issues/1610
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1804]: https://gitlab.freedesktop.org/drm/intel/issues/1804
  [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2100]: https://gitlab.freedesktop.org/drm/intel/issues/2100
  [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
  [i915#2291]: https://gitlab.freedesktop.org/drm/intel/issues/2291
  [i915#2292]: https://gitlab.freedesktop.org/drm/intel/issues/2292
  [i915#2295]: https://gitlab.freedesktop.org/drm/intel/issues/2295
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2370]: https://gitlab.freedesktop.org/drm/intel/issues/2370
  [i915#2426]: https://gitlab.freedesktop.org/drm/intel/issues/2426
  [i915#2485]: https://gitlab.freedesktop.org/drm/intel/issues/2485
  [i915#2505]: https://gitlab.freedesktop.org/drm/intel/issues/2505
  [i915#2641]: https://gitlab.freedesktop.org/drm/intel/issues/2641
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#2667]: https://gitlab.freedesktop.org/drm/intel/issues/2667
  [i915#2681]: https://gitlab.freedesktop.org/drm/intel/issues/2681
  [i915#2684]: https://gitlab.freedesktop.org/drm/intel/issues/2684
  [i915#2803]: https://gitlab.freedesktop.org/drm/intel/issues/2803
  [i915#2813]: https://gitlab.freedesktop.org/drm/intel/issues/2813
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2849]: https://gitlab.freedesktop.org/drm/intel/issues/2849
  [i915#2851]: https://gitlab.freedesktop.org/drm/intel/issues/2851
  [i915#2920]: https://gitlab.freedesktop.org/drm/intel/issues/2920
  [i915#3002]: https://gitlab.freedesktop.org/drm/intel/issues/3002
  [i915#3019]: https://gitlab.freedesktop.org/drm/intel/issues/3019
  [i915#3026]: https://gitlab.freedesktop.org/drm/intel/issues/3026
  [i915#3028]: https://gitlab.freedesktop.org/drm/intel/issues/3028
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54
  [i915#541]: https://gitlab.freedesktop.org/drm/intel/issues/541
  [i915#602]: https://gitlab.freedesktop.org/drm/intel/issues/602
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [k.org#202321]: ht

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19581/index.html


* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
@ 2021-02-04 15:32   ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-04 15:32 UTC (permalink / raw)
  To: Ville Syrjala
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> drm_vblank_restore() exists because certain power saving states
> can clobber the hardware frame counter. The way it does this is
> by guesstimating how many frames were missed purely based on
> the difference between the last stored timestamp vs. a newly
> sampled timestamp.
> 
> If we should call this function before a full frame has
> elapsed since we sampled the last timestamp we would end up
> with a possibly slightly different timestamp value for the
> same frame. Currently we will happily overwrite the already
> stored timestamp for the frame with the new value. This
> could cause userspace to observe two different timestamps
> for the same frame (and the timestamp could even go
> backwards depending on how much error we introduce when
> correcting the timestamp based on the scanout position).
> 
> To avoid that let's not update the stored timestamp unless we're
> also incrementing the sequence counter. We do still want to update
> vblank->last with the freshly sampled hw frame counter value so
> that subsequent vblank irqs/queries can actually use the hw frame
> counter to determine how many frames have elapsed.

Hm I'm not getting the reason for why we store the updated hw vblank
counter?

There's definitely a race when we grab the hw timestamp at a bad time
(which can't happen for the irq handler, realistically), so maybe we
should first adjust that to make sure we never store anything inconsistent
in the vblank state?
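
For context: the counter/timestamp sampling used elsewhere in drm_vblank.c
already retries if the hw counter ticks over while the timestamp is being
taken. A simplified sketch of that guard (helper names assumed from the
surrounding code, count being a small retry limit):

	do {
		cur_vblank = __get_vblank_counter(dev, pipe);
		drm_get_last_vbltimestamp(dev, pipe, &t_vblank, in_vblank_irq);
	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);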

And when we have that we should be able to pull the inc == 0 check out
into _restore(), including comment. Which I think should be cleaner.

Or I'm totally off with why you want to store the hw vblank counter?

> 
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 893165eeddf3..e127a7db2088 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
>  
>  	vblank->last = last;
>  
> +	/*
> +	 * drm_vblank_restore() wants to always update
> +	 * vblank->last since we can't trust the frame counter
> +	 * across power saving states. But we don't want to alter
> +	 * the stored timestamp for the same frame number since
> +	 * that would cause userspace to potentially observe two
> +	 * different timestamps for the same frame.
> +	 */
> +	if (vblank_count_inc == 0)
> +		return;
> +
>  	write_seqlock(&vblank->seqlock);
>  	vblank->time = t_vblank;
>  	atomic64_add(vblank_count_inc, &vblank->count);
> -- 
> 2.26.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-04 15:32   ` [Intel-gfx] " Daniel Vetter
@ 2021-02-04 15:55     ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-04 15:55 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > drm_vblank_restore() exists because certain power saving states
> > can clobber the hardware frame counter. The way it does this is
> > by guesstimating how many frames were missed purely based on
> > the difference between the last stored timestamp vs. a newly
> > sampled timestamp.
> > 
> > If we should call this function before a full frame has
> > elapsed since we sampled the last timestamp we would end up
> > with a possibly slightly different timestamp value for the
> > same frame. Currently we will happily overwrite the already
> > stored timestamp for the frame with the new value. This
> > could cause userspace to observe two different timestamps
> > for the same frame (and the timestamp could even go
> > backwards depending on how much error we introduce when
> > correcting the timestamp based on the scanout position).
> > 
> > To avoid that let's not update the stored timestamp unless we're
> > also incrementing the sequence counter. We do still want to update
> > vblank->last with the freshly sampled hw frame counter value so
> > that subsequent vblank irqs/queries can actually use the hw frame
> > counter to determine how many frames have elapsed.
> 
> Hm I'm not getting the reason for why we store the updated hw vblank
> counter?

Because next time a vblank irq happens the code will do:
diff = current_hw_counter - vblank->last

which won't work very well if vblank->last is garbage.

Updating vblank->last is pretty much why drm_vblank_restore()
exists at all.
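
For illustration, a stripped-down sketch of that path as it runs on the next
vblank irq/query (simplified from the hw-counter branch of
drm_update_vblank_count(); max_vblank_count stands for the driver's counter
wrap mask and is assumed valid here):

	cur_vblank = __get_vblank_counter(dev, pipe);
	/* modular subtraction, since the hw counter wraps */
	diff = (cur_vblank - vblank->last) & max_vblank_count;
	/* diff frames are then accounted via store_vblank(..., diff, ...) */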

> There's definitely a race when we grab the hw timestamp at a bad time
> (which can't happen for the irq handler, realistically), so maybe we
> should first adjust that to make sure we never store anything inconsistent
> in the vblank state?

Not sure what race you mean, or what inconsistent thing we store?

> 
> And when we have that we should be able to pull the inc == 0 check out
> into _restore(), including comment. Which I think should be cleaner.
> 
> Or I'm totally off with why you want to store the hw vblank counter?
> 
> > 
> > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> >  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > index 893165eeddf3..e127a7db2088 100644
> > --- a/drivers/gpu/drm/drm_vblank.c
> > +++ b/drivers/gpu/drm/drm_vblank.c
> > @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> >  
> >  	vblank->last = last;
> >  
> > +	/*
> > +	 * drm_vblank_restore() wants to always update
> > +	 * vblank->last since we can't trust the frame counter
> > +	 * across power saving states. But we don't want to alter
> > +	 * the stored timestamp for the same frame number since
> > +	 * that would cause userspace to potentially observe two
> > +	 * different timestamps for the same frame.
> > +	 */
> > +	if (vblank_count_inc == 0)
> > +		return;
> > +
> >  	write_seqlock(&vblank->seqlock);
> >  	vblank->time = t_vblank;
> >  	atomic64_add(vblank_count_inc, &vblank->count);
> > -- 
> > 2.26.2
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Ville Syrjälä
Intel


* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-04 15:55     ` [Intel-gfx] " Ville Syrjälä
@ 2021-02-05 15:46       ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-05 15:46 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Daniel Vetter, intel-gfx, dri-devel, Dhinakaran Pandiyan, Rodrigo Vivi

On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > 
> > > drm_vblank_restore() exists because certain power saving states
> > > can clobber the hardware frame counter. The way it does this is
> > > by guesstimating how many frames were missed purely based on
> > > the difference between the last stored timestamp vs. a newly
> > > sampled timestamp.
> > > 
> > > If we should call this function before a full frame has
> > > elapsed since we sampled the last timestamp we would end up
> > > with a possibly slightly different timestamp value for the
> > > same frame. Currently we will happily overwrite the already
> > > stored timestamp for the frame with the new value. This
> > > could cause userspace to observe two different timestamps
> > > for the same frame (and the timestamp could even go
> > > backwards depending on how much error we introduce when
> > > correcting the timestamp based on the scanout position).
> > > 
> > > To avoid that let's not update the stored timestamp unless we're
> > > also incrementing the sequence counter. We do still want to update
> > > vblank->last with the freshly sampled hw frame counter value so
> > > that subsequent vblank irqs/queries can actually use the hw frame
> > > counter to determine how many frames have elapsed.
> > 
> > Hm I'm not getting the reason for why we store the updated hw vblank
> > counter?
> 
> Because next time a vblank irq happens the code will do:
> diff = current_hw_counter - vblank->last
> 
> which won't work very well if vblank->last is garbage.
> 
> Updating vblank->last is pretty much why drm_vblank_restore()
> exists at all.

Oh sure, _restore has to update this, together with the timestamp.

But your code adds such an update where we update the hw vblank counter,
but not the timestamp, and that feels buggy. Either we're still in the
same frame, and then we should store nothing. Or we advanced, and then we
probably want a new timestamp for that frame too.

Advancing the vblank counter and not advancing the timestamp sounds like a
bug in our code.

> > There's definitely a race when we grab the hw timestamp at a bad time
> > (which can't happen for the irq handler, realistically), so maybe we
> > should first adjust that to make sure we never store anything inconsistent
> > in the vblank state?
> 
> Not sure what race you mean, or what inconsistent thing we store?

For the drm_handle_vblank code we have some fudge so we don't compute
something silly when the irq fires (like it often does) before
top-of-frame. Ofc that fudge is inherently racy: if the irq is extremely
delayed (almost an entire frame) we'll get it wrong.

In practice it doesn't matter.

Now _restore can be called anytime, so we might end up in situations where
the exact point where we jump to the next frame count, and the exact time
where the hw counter jumps, don't line up. And I think in that case funny
things can happen, and I'm not sure your approach of "update hw counter
but don't update timestamp" is the right way.

I think if we instead ignore any update if our fudge-corrected timestamp
is roughly the same, then we handle that race correctly and there's no
jumping around.

Cheers, Daniel

> > And when we have that we should be able to pull the inc == 0 check out
> > into _restore(), including comment. Which I think should be cleaner.
> > 
> > Or I'm totally off with why you want to store the hw vblank counter?
> > 
> > > 
> > > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > index 893165eeddf3..e127a7db2088 100644
> > > --- a/drivers/gpu/drm/drm_vblank.c
> > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> > >  
> > >  	vblank->last = last;
> > >  
> > > +	/*
> > > +	 * drm_vblank_restore() wants to always update
> > > +	 * vblank->last since we can't trust the frame counter
> > > +	 * across power saving states. But we don't want to alter
> > > +	 * the stored timestamp for the same frame number since
> > > +	 * that would cause userspace to potentially observe two
> > > +	 * different timestamps for the same frame.
> > > +	 */
> > > +	if (vblank_count_inc == 0)
> > > +		return;
> > > +
> > >  	write_seqlock(&vblank->seqlock);
> > >  	vblank->time = t_vblank;
> > >  	atomic64_add(vblank_count_inc, &vblank->count);
> > > -- 
> > > 2.26.2
> > > 
> > 
> > -- 
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
> 
> -- 
> Ville Syrjälä
> Intel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-05 15:46       ` [Intel-gfx] " Daniel Vetter
@ 2021-02-05 16:24         ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-05 16:24 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > 
> > > > drm_vblank_restore() exists because certain power saving states
> > > > can clobber the hardware frame counter. The way it does this is
> > > > by guesstimating how many frames were missed purely based on
> > > > the difference between the last stored timestamp vs. a newly
> > > > sampled timestamp.
> > > > 
> > > > If we should call this function before a full frame has
> > > > elapsed since we sampled the last timestamp we would end up
> > > > with a possibly slightly different timestamp value for the
> > > > same frame. Currently we will happily overwrite the already
> > > > stored timestamp for the frame with the new value. This
> > > > could cause userspace to observe two different timestamps
> > > > for the same frame (and the timestamp could even go
> > > > backwards depending on how much error we introduce when
> > > > correcting the timestamp based on the scanout position).
> > > > 
> > > > To avoid that let's not update the stored timestamp unless we're
> > > > also incrementing the sequence counter. We do still want to update
> > > > vblank->last with the freshly sampled hw frame counter value so
> > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > counter to determine how many frames have elapsed.
> > > 
> > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > counter?
> > 
> > Because next time a vblank irq happens the code will do:
> > diff = current_hw_counter - vblank->last
> > 
> > which won't work very well if vblank->last is garbage.
> > 
> > Updating vblank->last is pretty much why drm_vblank_restore()
> > exists at all.
> 
> Oh sure, _restore has to update this, together with the timestamp.
> 
> But your code adds such an update where we update the hw vblank counter,
> but not the timestamp, and that feels buggy. Either we're still in the
> same frame, and then we should store nothing. Or we advanced, and then we
> probably want a new timestamp for that frame too.

Even if we're still in the same frame the hw frame counter may already
have been reset due to the power well having been turned off. That is
what I'm trying to fix here.

Now I suppose that's fairly unlikely, at least with PSR which probably
does impose some extra delays before the power gets yanked. But at least
theoretically possible.

> 
> Advancing the vblank counter and not advancing the timestamp sounds like a
> bug in our code.

We're not advancing the vblank counter. We're storing a new
timestamp for a vblank counter value which already had a timestamp.

> 
> > > There's definitely a race when we grab the hw timestamp at a bad time
> > > (which can't happen for the irq handler, realistically), so maybe we
> > > should first adjust that to make sure we never store anything inconsistent
> > > in the vblank state?
> > 
> > Not sure what race you mean, or what inconsistent thing we store?
> 
> For the drm_handle_vblank code we have some fudge so we don't compute
> something silly when the irq fires (like it often does) before
> top-of-frame. Ofc that fudge is inherently racy: if the irq is extremely
> delayed (almost an entire frame) we'll get it wrong.

Sorry, still no idea what fudge you mean.

> 
> In practice it doesn't matter.
> 
> Now _restore can be called anytime, so we might end up in situations where
> the exact point where we jump to the next frame count, and the exact time
> where the hw counter jumps, don't line up. And I think in that case funny
> things can happen, and I'm not sure your approach of "update hw counter
> but don't update timestamp" is the right way.
> 
> I think if we instead ignore any update if our fudge-corrected timestamp
> is roughly the same, then we handle that race correctly and there's no
> jumping around.

We can't just not update vblank->last, assuming the theory holds
that the power well may turn off even if the last vblank timestamp
was sampled less than a full frame ago.

That will cause the next diff=current_hw_counter-vblank->last to
generate total garbage and then the vblank seq number will jump
to some random value. Which is exactly the main problem
drm_vblank_restore() is trying to prevent.

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-05 16:24         ` [Intel-gfx] " Ville Syrjälä
@ 2021-02-05 21:19           ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-05 21:19 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > 
> > > > > drm_vblank_restore() exists because certain power saving states
> > > > > can clobber the hardware frame counter. The way it does this is
> > > > > by guesstimating how many frames were missed purely based on
> > > > > the difference between the last stored timestamp vs. a newly
> > > > > sampled timestamp.
> > > > > 
> > > > > If we should call this function before a full frame has
> > > > > elapsed since we sampled the last timestamp we would end up
> > > > > with a possibly slightly different timestamp value for the
> > > > > same frame. Currently we will happily overwrite the already
> > > > > stored timestamp for the frame with the new value. This
> > > > > could cause userspace to observe two different timestamps
> > > > > for the same frame (and the timestamp could even go
> > > > > backwards depending on how much error we introduce when
> > > > > correcting the timestamp based on the scanout position).
> > > > > 
> > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > also incrementing the sequence counter. We do still want to update
> > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > counter to determine how many frames have elapsed.
> > > > 
> > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > counter?
> > > 
> > > Because next time a vblank irq happens the code will do:
> > > diff = current_hw_counter - vblank->last
> > > 
> > > which won't work very well if vblank->last is garbage.
> > > 
> > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > exists at all.
> > 
> > Oh sure, _restore has to update this, together with the timestamp.
> > 
> > But your code adds such an update where we update the hw vblank counter,
> > but not the timestamp, and that feels buggy. Either we're still in the
> > same frame, and then we should story nothing. Or we advanced, and then we
> > probably want a new timestampt for that frame too.
> 
> Even if we're still in the same frame the hw frame counter may already
> have been reset due to the power well having been turned off. That is
> what I'm trying to fix here.
> 
> Now I suppose that's fairly unlikely, at least with PSR which probably
> does impose some extra delays before the power gets yanked. But at least
> theoretically possible.

Pondering about this a bit further. I think the fact that the current
code takes the round-to-closest approach I used for the vblank handler
is perhaps a bit bad. It could push the seq counter forward if we're
past the halfway point of a frame. I think that rounding behaviour
makes sense for the irq since those tick steadily and so allowing a bit
of error either way seems correct to me. Perhaps round-down might be
the better option for _restore(). Not quite sure, need more thinking
probably.
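
For reference, the timestamp-based guess is essentially (simplified sketch,
not the literal drm_vblank.c code):

/* frames elapsed according to the timestamps alone */
u64 diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));

/* current behaviour: round to the closest frame */
diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns);

/* possible round-down variant for _restore() */
/* diff = div_u64(diff_ns, framedur_ns); */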

Another idea that came to me now is that maybe we should actually just
check if the current hw frame counter value looks sane, as in something
like:

diff_hw_counter = current_hw_counter-stored_hw_counter
diff_ts = (current_ts-stored_ts)/framedur

if (diff_hw_counter ~= diff_ts)
	diff = diff_hw_counter;
else
	diff = diff_ts;

and if they seem to match then just keep trusting the hw counter.
So only if there's a significant difference would we disregard
the diff of the hw counter and instead use the diff based on the
timestamps. Not sure what "significant" is though; one frame, two
frames?
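
Concretely something like this perhaps (rough sketch; cur_hw_counter/cur_ts
are whatever we just sampled, and the one-frame tolerance is just a
placeholder):

u32 diff_hw = cur_hw_counter - vblank->last;
u32 diff_ts = DIV_ROUND_CLOSEST_ULL(ktime_to_ns(ktime_sub(cur_ts, vblank->time)),
				    framedur_ns);
u32 delta = diff_hw > diff_ts ? diff_hw - diff_ts : diff_ts - diff_hw;

/* trust the hw counter unless it disagrees badly with the timestamps */
diff = delta <= 1 ? diff_hw : diff_ts;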

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-05 21:19           ` [Intel-gfx] " Ville Syrjälä
@ 2021-02-08  9:56             ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-08  9:56 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Daniel Vetter, intel-gfx, dri-devel, Dhinakaran Pandiyan, Rodrigo Vivi

On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > 
> > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > by guesstimating how many frames were missed purely based on
> > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > sampled timestamp.
> > > > > > 
> > > > > > If we should call this function before a full frame has
> > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > with a possibly slightly different timestamp value for the
> > > > > > same frame. Currently we will happily overwrite the already
> > > > > > stored timestamp for the frame with the new value. This
> > > > > > could cause userspace to observe two different timestamps
> > > > > > for the same frame (and the timestamp could even go
> > > > > > backwards depending on how much error we introduce when
> > > > > > correcting the timestamp based on the scanout position).
> > > > > > 
> > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > counter to determine how many frames have elapsed.
> > > > > 
> > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > counter?
> > > > 
> > > > Because next time a vblank irq happens the code will do:
> > > > diff = current_hw_counter - vblank->last
> > > > 
> > > > which won't work very well if vblank->last is garbage.
> > > > 
> > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > exists at all.
> > > 
> > > Oh sure, _restore has to update this, together with the timestamp.
> > > 
> > > But your code adds such an update where we update the hw vblank counter,
> > > but not the timestamp, and that feels buggy. Either we're still in the
> > > same frame, and then we should store nothing. Or we advanced, and then we
> > > probably want a new timestamp for that frame too.
> > 
> > Even if we're still in the same frame the hw frame counter may already
> > have been reset due to the power well having been turned off. That is
> > what I'm trying to fix here.
> > 
> > Now I suppose that's fairly unlikely, at least with PSR which probably
> > does impose some extra delays before the power gets yanked. But at least
> > theoretically possible.
> 
> Pondering about this a bit further. I think the fact that the current
> code takes the round-to-closest approach I used for the vblank handler
> is perhaps a bit bad. It could push the seq counter forward if we're
> past the halfway point of a frame. I think that rounding behaviour
> makes sense for the irq since those tick steadily and so allowing a bit
> of error either way seems correct to me. Perhaps round-down might be
> the better option for _restore(). Not quite sure, need more thinking
> probably.

Yes this is the rounding I'm worried about.

But your point above that the hw might reset the counter again is also
valid. I'm assuming what you're worried about is that we first do a
_restore (and the hw vblank counter hasn't been trashed yet), and then in
the same frame we do another restore, but now the hw frame counter has
been trashed, and we need to update it?

> Another idea that came to me now is that maybe we should actually just
> check if the current hw frame counter value looks sane, as in something
> like:
> 
> diff_hw_counter = current_hw_counter-stored_hw_counter
> diff_ts = (current_ts-stored_ts)/framedur
> 
> if (diff_hw_counter ~= diff_ts)
> 	diff = diff_hw_counter;
> else
> 	diff = diff_ts;
> 
> and if they seem to match then just keep trusting the hw counter.
> So only if there's a significant difference would we disregard
> the diff of the hw counter and instead use the diff based on the
> timestamps. Not sure what "significant" is though; One frame, two
> frames?

Hm, another idea: The only point where we can trust the entire hw counter
+ timestamp sampling is when the irq happens. Because then we know the
driver will have properly corrected for any hw oddities (like hw counter
flipping not at top-of-frame, like the core expects).

So what if _restore always goes back to the last such trusted hw counter
for computing the frame counter diff and all that stuff? That way if we
have a bunch of _restore calls with inconsistent hw vblank counters, we will a)
only take the last one (fixes the bug you're trying to fix) b) still use
the same last trusted baseline for computations (addresses the race I'm
seeing).

Or does this not work?

It does complicate the code a bit, because we'd need to store the
count/timestamp information from _restore outside of the usual vblank ts
array. But I think that addresses everything.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-08  9:56             ` [Intel-gfx] " Daniel Vetter
@ 2021-02-08 16:58               ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-08 16:58 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Mon, Feb 08, 2021 at 10:56:36AM +0100, Daniel Vetter wrote:
> On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> > On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > 
> > > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > > by guesstimating how many frames were missed purely based on
> > > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > > sampled timestamp.
> > > > > > > 
> > > > > > > If we should call this function before a full frame has
> > > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > > with a possibly slightly different timestamp value for the
> > > > > > > same frame. Currently we will happily overwrite the already
> > > > > > > stored timestamp for the frame with the new value. This
> > > > > > > could cause userspace to observe two different timestamps
> > > > > > > for the same frame (and the timestamp could even go
> > > > > > > backwards depending on how much error we introduce when
> > > > > > > correcting the timestamp based on the scanout position).
> > > > > > > 
> > > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > > counter to determine how many frames have elapsed.
> > > > > > 
> > > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > > counter?
> > > > > 
> > > > > Because next time a vblank irq happens the code will do:
> > > > > diff = current_hw_counter - vblank->last
> > > > > 
> > > > > which won't work very well if vblank->last is garbage.
> > > > > 
> > > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > > exists at all.
> > > > 
> > > > Oh sure, _restore has to update this, together with the timestamp.
> > > > 
> > > > But your code adds such an update where we update the hw vblank counter,
> > > > but not the timestamp, and that feels buggy. Either we're still in the
> > > > same frame, and then we should store nothing. Or we advanced, and then we
> > > > probably want a new timestamp for that frame too.
> > > 
> > > Even if we're still in the same frame the hw frame counter may already
> > > have been reset due to the power well having been turned off. That is
> > > what I'm trying to fix here.
> > > 
> > > Now I suppose that's fairly unlikely, at least with PSR which probably
> > > does impose some extra delays before the power gets yanked. But at least
> > > theoretically possible.
> > 
> > Pondering about this a bit further. I think the fact that the current
> > code takes the round-to-closest approach I used for the vblank handler
> > is perhaps a bit bad. It could push the seq counter forward if we're
> > past the halfway point of a frame. I think that rounding behaviour
> > makes sense for the irq since those tick steadily and so allowing a bit
> > of error either way seems correct to me. Perhaps round-down might be
> > the better option for _restore(). Not quite sure, need more thinking
> > probably.
> 
> Yes this is the rounding I'm worried about.

Actually I don't think this is really an issue since we are working 
with the corrected timestamps here. Those always line up with
frames, so unless the correction is really buggy or the hw somehow
skips a partial frame it should work rather well. At least when
operating with small timescales. For large gaps the error might
creep up, but I don't think a small error in the predicted seq
number over a long timespan is really a problem.

> 
> But your point above that the hw might reset the counter again is also
> valid. I'm assuming what you're worried about is that we first do a
> _restore (and the hw vblank counter hasn't been trashed yet), and then in
> the same frame we do another restore, but now the hw frame counter has
> been trashed, and we need to update it?

Yeah, although the pre-trashing _restore could also just be
a vblank irq I think.

> 
> > Another idea that came to me now is that maybe we should actually just
> > check if the current hw frame counter value looks sane, as in something
> > like:
> > 
> > diff_hw_counter = current_hw_counter-stored_hw_counter
> > diff_ts = (current_ts-stored_ts)/framedur
> > 
> > if (diff_hw_counter ~= diff_ts)
> > 	diff = diff_hw_counter;
> > else
> > 	diff = diff_ts;
> > 
> > and if they seem to match then just keep trusting the hw counter.
> > So only if there's a significant difference would we disregard
> > the diff of the hw counter and instead use the diff based on the
> > timestamps. Not sure what "significant" is though; One frame, two
> > frames?
> 
> Hm, another idea: The only point where we can trust the entire hw counter
> + timestamp sampling is when the irq happens. Because then we know the
> driver will have properly corrected for any hw oddities (like hw counter
> flipping not at top-of-frame, like the core expects).

i915 at least gives out correct data regardless of when you sample
it. Well, except for the cases where the hw counter gets trashed,
in which case the hw counter is garbage (when compared with .last)
but the timestamp is still correct.

> 
> So what if _restore always goes back to the last such trusted hw counter
> for computing the frame counter diff and all that stuff? That way if we
> have a bunch of _restore calls with inconsistent hw vblank counters, we will a)
> only take the last one (fixes the bug you're trying to fix) b) still use
> the same last trusted baseline for computations (addresses the race I'm
> seeing).
> 
> Or does this not work?

I don't think I really understand what you're suggesting here.
_restore is already using the last trusted data (the stored
timestamp + .last).

So the one thing _restore will have to update is .last.
I think it can either do what it does now and set .last
to the current hw counter value + update the timestamp
to match, or it could perhaps adjust the stored .last
such that the already stored timestamp and the updated
.last match up. But I think both of those options have
the same level of inaccuracy since both would still do
the same ts_diff->hw_counter_diff prediction. 
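
I.e. roughly (sketch of the two options, not actual code; cur_hw_counter and
cur_ts are the freshly sampled values):

/* option A: what _restore() does today */
vblank->last = cur_hw_counter;
/* ...and overwrite vblank->time with cur_ts */

/* option B: keep the stored timestamp, back-date .last to match it */
u32 frames = DIV_ROUND_CLOSEST_ULL(ktime_to_ns(ktime_sub(cur_ts, vblank->time)),
				   framedur_ns);
vblank->last = cur_hw_counter - frames;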

> 
> It does complicate the code a bit, because we'd need to store the
> count/timestamp information from _restore outside of the usual vblank ts
> array. But I think that addresses everything.

Hmm. So restore would store this extra information
somewhere else, and not update the normal stuff at all?
What exactly would we do with that extra data?

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-08 16:58               ` Ville Syrjälä
  0 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-08 16:58 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel

On Mon, Feb 08, 2021 at 10:56:36AM +0100, Daniel Vetter wrote:
> On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> > On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > 
> > > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > > by guesstimating how many frames were missed purely based on
> > > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > > sampled timestamp.
> > > > > > > 
> > > > > > > If we should call this function before a full frame has
> > > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > > with a possibly slightly different timestamp value for the
> > > > > > > same frame. Currently we will happily overwrite the already
> > > > > > > stored timestamp for the frame with the new value. This
> > > > > > > could cause userspace to observe two different timestamps
> > > > > > > for the same frame (and the timestamp could even go
> > > > > > > backwards depending on how much error we introduce when
> > > > > > > correcting the timestamp based on the scanout position).
> > > > > > > 
> > > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > > counter to determine how many frames have elapsed.
> > > > > > 
> > > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > > counter?
> > > > > 
> > > > > Because next time a vblank irq happens the code will do:
> > > > > diff = current_hw_counter - vblank->last
> > > > > 
> > > > > which won't work very well if vblank->last is garbage.
> > > > > 
> > > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > > exists at all.
> > > > 
> > > > Oh sure, _restore has to update this, together with the timestamp.
> > > > 
> > > > But your code adds such an update where we update the hw vblank counter,
> > > > but not the timestamp, and that feels buggy. Either we're still in the
> > > > same frame, and then we should story nothing. Or we advanced, and then we
> > > > probably want a new timestampt for that frame too.
> > > 
> > > Even if we're still in the same frame the hw frame counter may already
> > > have been reset due to the power well having been turned off. That is
> > > what I'm trying to fix here.
> > > 
> > > Now I suppose that's fairly unlikely, at least with PSR which probably
> > > does impose some extra delays before the power gets yanked. But at least
> > > theoretically possible.
> > 
> > Pondering about this a bit further. I think the fact that the current
> > code takes the round-to-closest approach I used for the vblank handler
> > is perhaps a bit bad. It could push the seq counter forward if we're
> > past the halfway point of a frame. I think that rounding behaviour
> > makes sense for the irq since those tick steadily and so allowing a bit
> > of error either way seems correct to me. Perhaps round-down might be
> > the better option for _restore(). Not quites sure, need more thinking
> > probably.
> 
> Yes this is the rounding I'm worried about.

Actually I don't think this is really an issue since we are working 
with the corrected timestamps here. Those always line up with
frames, so unless the correction is really buggy or the hw somehow
skips a partial frame it should work rather well. At least when
operating with small timescales. For large gaps the error might
creep up, but I don't think a small error in the predicted seq
number over a long timespan is really a problem.

> 
> But your point above that the hw might reset the counter again is also
> valid. I'm assuming what you're worried about is that we first do a
> _restore (and the hw vblank counter hasn't been trashed yet), and then in
> the same frame we do another restore, but now the hw frame counter has
> been trashe, and we need to update it?

Yeah, although the pre-trashing _restore could also just be
a vblank irq I think.

> 
> > Another idea that came to me now is that maybe we should actually just
> > check if the current hw frame counter value looks sane, as in something
> > like:
> > 
> > diff_hw_counter = current_hw_counter-stored_hw_counter
> > diff_ts = (current_ts-stored_ts)/framedur
> > 
> > if (diff_hw_counter ~= diff_ts)
> > 	diff = diff_hw_counter;
> > else
> > 	diff = diff_ts;
> > 
> > and if they seem to match then just keep trusting the hw counter.
> > So only if there's a significant difference would we disregard
> > the diff of the hw counter and instead use the diff based on the
> > timestamps. Not sure what "significant" is though; One frame, two
> > frames?
> 
> Hm, another idea: The only point where we can trust the entire hw counter
> + timestamp sampling is when the irq happens. Because then we know the
> driver will have properly corrected for any hw oddities (like hw counter
> flipping not at top-of-frame, like the core expects).

i915 at least gives out correct data regardless of when you sample
it. Well, except for the cases where the hw counter gets trashed,
in which case the hw counter is garbage (when compared with .last)
but the timestamp is still correct.

> 
> So what if _restore always goes back to the last such trusted hw counter
> for computing the frame counter diff and all that stuff? That way if we
> have a bunch of _restore with incosisten hw vblank counter, we will a)
> only take the last one (fixes the bug you're trying to fix) b) still use
> the same last trusted baseline for computations (addresses the race I'm
> seeing).
> 
> Or does this not work?

I don't think I really understand what you're suggesting here.
_restore is already using the last trusted data (the stored
timestamp + .last).

So the one thing _restore will have to update is .last.
I think it can either do what it does now and set .last
to the current hw counter value + update the timestamp
to match, or it could perhaps adjust the stored .last
such that the already stored timestamp and the updated
.last match up. But I think both of those options have
the same level of inaccuracy since both would still do
the same ts_diff->hw_counter_diff prediction. 

> 
> It does complicate the code a bit, because we'd need to store the
> count/timestamp information from _restore outside of the usual vblank ts
> array. But I think that addresses everything.

Hmm. So restore would store this extra information
somewhere else, and not update the normal stuff at all?
What exactly would we do with that extra data?

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-08 16:58               ` [Intel-gfx] " Ville Syrjälä
@ 2021-02-08 17:43                 ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-08 17:43 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Mon, Feb 8, 2021 at 5:58 PM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Mon, Feb 08, 2021 at 10:56:36AM +0100, Daniel Vetter wrote:
> > On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> > > On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > > > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > >
> > > > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > > > by guesstimating how many frames were missed purely based on
> > > > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > > > sampled timestamp.
> > > > > > > >
> > > > > > > > If we should call this function before a full frame has
> > > > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > > > with a possibly slightly different timestamp value for the
> > > > > > > > same frame. Currently we will happily overwrite the already
> > > > > > > > stored timestamp for the frame with the new value. This
> > > > > > > > could cause userspace to observe two different timestamps
> > > > > > > > for the same frame (and the timestamp could even go
> > > > > > > > backwards depending on how much error we introduce when
> > > > > > > > correcting the timestamp based on the scanout position).
> > > > > > > >
> > > > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > > > counter to determine how many frames have elapsed.
> > > > > > >
> > > > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > > > counter?
> > > > > >
> > > > > > Because next time a vblank irq happens the code will do:
> > > > > > diff = current_hw_counter - vblank->last
> > > > > >
> > > > > > which won't work very well if vblank->last is garbage.
> > > > > >
> > > > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > > > exists at all.
> > > > >
> > > > > Oh sure, _restore has to update this, together with the timestamp.
> > > > >
> > > > > But your code adds such an update where we update the hw vblank counter,
> > > > > but not the timestamp, and that feels buggy. Either we're still in the
> > > > > same frame, and then we should story nothing. Or we advanced, and then we
> > > > > probably want a new timestampt for that frame too.
> > > >
> > > > Even if we're still in the same frame the hw frame counter may already
> > > > have been reset due to the power well having been turned off. That is
> > > > what I'm trying to fix here.
> > > >
> > > > Now I suppose that's fairly unlikely, at least with PSR which probably
> > > > does impose some extra delays before the power gets yanked. But at least
> > > > theoretically possible.
> > >
> > > Pondering about this a bit further. I think the fact that the current
> > > code takes the round-to-closest approach I used for the vblank handler
> > > is perhaps a bit bad. It could push the seq counter forward if we're
> > > past the halfway point of a frame. I think that rounding behaviour
> > > makes sense for the irq since those tick steadily and so allowing a bit
> > > of error either way seems correct to me. Perhaps round-down might be
> > > the better option for _restore(). Not quites sure, need more thinking
> > > probably.
> >
> > Yes this is the rounding I'm worried about.
>
> Actually I don't think this is really an issue since we are working
> with the corrected timestamps here. Those always line up with
> frames, so unless the correction is really buggy or the hw somehow
> skips a partial frame it should work rather well. At least when
> operating with small timescales. For large gaps the error might
> creep up, but I don't think a small error in the predicted seq
> number over a long timespan is really a problem.

That corrected timestamp is what can go wrong I think: There's no
guarantee that drm_crtc_vblank_helper_get_vblank_timestamp_internal()
flips to top-of-frame at the exact same time as the hw vblank
counter flips. Or at least I'm not seeing where we correct them both
together.

> > But your point above that the hw might reset the counter again is also
> > valid. I'm assuming what you're worried about is that we first do a
> > _restore (and the hw vblank counter hasn't been trashed yet), and then in
> > the same frame we do another restore, but now the hw frame counter has
> > been trashe, and we need to update it?
>
> Yeah, although the pre-trashing _restore could also just be
> a vblank irq I think.
>
> >
> > > Another idea that came to me now is that maybe we should actually just
> > > check if the current hw frame counter value looks sane, as in something
> > > like:
> > >
> > > diff_hw_counter = current_hw_counter-stored_hw_counter
> > > diff_ts = (current_ts-stored_ts)/framedur
> > >
> > > if (diff_hw_counter ~= diff_ts)
> > >     diff = diff_hw_counter;
> > > else
> > >     diff = diff_ts;
> > >
> > > and if they seem to match then just keep trusting the hw counter.
> > > So only if there's a significant difference would we disregard
> > > the diff of the hw counter and instead use the diff based on the
> > > timestamps. Not sure what "significant" is though; One frame, two
> > > frames?
> >
> > Hm, another idea: The only point where we can trust the entire hw counter
> > + timestamp sampling is when the irq happens. Because then we know the
> > driver will have properly corrected for any hw oddities (like hw counter
> > flipping not at top-of-frame, like the core expects).
>
> i915 at least gives out correct data regardless of when you sample
> it. Well, except for the cases where the hw counter gets trashed,
> in which case the hw counter is garbage (when compared with .last)
> but the timestamp is still correct.

Hm where/how do we handle this? Maybe I'm just out of date with how it
all works nowadays.

> > So what if _restore always goes back to the last such trusted hw counter
> > for computing the frame counter diff and all that stuff? That way if we
> > have a bunch of _restore with incosisten hw vblank counter, we will a)
> > only take the last one (fixes the bug you're trying to fix) b) still use
> > the same last trusted baseline for computations (addresses the race I'm
> > seeing).
> >
> > Or does this not work?
>
> I don't think I really understand what you're suggesting here.
> _restore is already using the last trusted data (the stored
> timestamp + .last).
>
> So the one thing _restore will have to update is .last.
> I think it can either do what it does now and set .last
> to the current hw counter value + update the timestamp
> to match, or it could perhaps adjust the stored .last
> such that the already stored timestamp and the updated
> .last match up. But I think both of those options have
> the same level or inaccuracy since both would still do
> the same ts_diff->hw_counter_diff prediction.
>
> >
> > It does complicate the code a bit, because we'd need to store the
> > count/timestamp information from _restore outside of the usual vblank ts
> > array. But I think that addresses everything.
>
> Hmm. So restore would store this extra information
> somewhere else, and not update the normal stuff at all?
> What exactly would we do with that extra data?

Hm I guess I didn't think this through. But the idea I had was:
- _restore always recomputes back from the last
drm_crtc_handle_vblank-stored timestamp.
- the first drm_crtc_handle_vblank bakes in any corrections that
_restore has prepared meanwhile
- same applies to all the sampling functions where we might look at the
latest timestamps/counter values.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-08 17:43                 ` Daniel Vetter
  0 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-08 17:43 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, Dhinakaran Pandiyan, dri-devel

On Mon, Feb 8, 2021 at 5:58 PM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Mon, Feb 08, 2021 at 10:56:36AM +0100, Daniel Vetter wrote:
> > On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> > > On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > > > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > >
> > > > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > > > by guesstimating how many frames were missed purely based on
> > > > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > > > sampled timestamp.
> > > > > > > >
> > > > > > > > If we should call this function before a full frame has
> > > > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > > > with a possibly slightly different timestamp value for the
> > > > > > > > same frame. Currently we will happily overwrite the already
> > > > > > > > stored timestamp for the frame with the new value. This
> > > > > > > > could cause userspace to observe two different timestamps
> > > > > > > > for the same frame (and the timestamp could even go
> > > > > > > > backwards depending on how much error we introduce when
> > > > > > > > correcting the timestamp based on the scanout position).
> > > > > > > >
> > > > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > > > counter to determine how many frames have elapsed.
> > > > > > >
> > > > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > > > counter?
> > > > > >
> > > > > > Because next time a vblank irq happens the code will do:
> > > > > > diff = current_hw_counter - vblank->last
> > > > > >
> > > > > > which won't work very well if vblank->last is garbage.
> > > > > >
> > > > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > > > exists at all.
> > > > >
> > > > > Oh sure, _restore has to update this, together with the timestamp.
> > > > >
> > > > > But your code adds such an update where we update the hw vblank counter,
> > > > > but not the timestamp, and that feels buggy. Either we're still in the
> > > > > same frame, and then we should story nothing. Or we advanced, and then we
> > > > > probably want a new timestampt for that frame too.
> > > >
> > > > Even if we're still in the same frame the hw frame counter may already
> > > > have been reset due to the power well having been turned off. That is
> > > > what I'm trying to fix here.
> > > >
> > > > Now I suppose that's fairly unlikely, at least with PSR which probably
> > > > does impose some extra delays before the power gets yanked. But at least
> > > > theoretically possible.
> > >
> > > Pondering about this a bit further. I think the fact that the current
> > > code takes the round-to-closest approach I used for the vblank handler
> > > is perhaps a bit bad. It could push the seq counter forward if we're
> > > past the halfway point of a frame. I think that rounding behaviour
> > > makes sense for the irq since those tick steadily and so allowing a bit
> > > of error either way seems correct to me. Perhaps round-down might be
> > > the better option for _restore(). Not quites sure, need more thinking
> > > probably.
> >
> > Yes this is the rounding I'm worried about.
>
> Actually I don't think this is really an issue since we are working
> with the corrected timestamps here. Those always line up with
> frames, so unless the correction is really buggy or the hw somehow
> skips a partial frame it should work rather well. At least when
> operating with small timescales. For large gaps the error might
> creep up, but I don't think a small error in the predicted seq
> number over a long timespan is really a problem.

That corrected timestamp is what can go wrong I think: There's no
guarantee that drm_crtc_vblank_helper_get_vblank_timestamp_internal()
flips to top-of-frame at the exact same time as the hw vblank
counter flips. Or at least I'm not seeing where we correct them both
together.

> > But your point above that the hw might reset the counter again is also
> > valid. I'm assuming what you're worried about is that we first do a
> > _restore (and the hw vblank counter hasn't been trashed yet), and then in
> > the same frame we do another restore, but now the hw frame counter has
> > been trashe, and we need to update it?
>
> Yeah, although the pre-trashing _restore could also just be
> a vblank irq I think.
>
> >
> > > Another idea that came to me now is that maybe we should actually just
> > > check if the current hw frame counter value looks sane, as in something
> > > like:
> > >
> > > diff_hw_counter = current_hw_counter-stored_hw_counter
> > > diff_ts = (current_ts-stored_ts)/framedur
> > >
> > > if (diff_hw_counter ~= diff_ts)
> > >     diff = diff_hw_counter;
> > > else
> > >     diff = diff_ts;
> > >
> > > and if they seem to match then just keep trusting the hw counter.
> > > So only if there's a significant difference would we disregard
> > > the diff of the hw counter and instead use the diff based on the
> > > timestamps. Not sure what "significant" is though; One frame, two
> > > frames?
> >
> > Hm, another idea: The only point where we can trust the entire hw counter
> > + timestamp sampling is when the irq happens. Because then we know the
> > driver will have properly corrected for any hw oddities (like hw counter
> > flipping not at top-of-frame, like the core expects).
>
> i915 at least gives out correct data regardless of when you sample
> it. Well, except for the cases where the hw counter gets trashed,
> in which case the hw counter is garbage (when compared with .last)
> but the timestamp is still correct.

Hm where/how do we handle this? Maybe I'm just out of date with how it
all works nowadays.

> > So what if _restore always goes back to the last such trusted hw counter
> > for computing the frame counter diff and all that stuff? That way if we
> > have a bunch of _restore with incosisten hw vblank counter, we will a)
> > only take the last one (fixes the bug you're trying to fix) b) still use
> > the same last trusted baseline for computations (addresses the race I'm
> > seeing).
> >
> > Or does this not work?
>
> I don't think I really understand what you're suggesting here.
> _restore is already using the last trusted data (the stored
> timestamp + .last).
>
> So the one thing _restore will have to update is .last.
> I think it can either do what it does now and set .last
> to the current hw counter value + update the timestamp
> to match, or it could perhaps adjust the stored .last
> such that the already stored timestamp and the updated
> .last match up. But I think both of those options have
> the same level or inaccuracy since both would still do
> the same ts_diff->hw_counter_diff prediction.
>
> >
> > It does complicate the code a bit, because we'd need to store the
> > count/timestamp information from _restore outside of the usual vblank ts
> > array. But I think that addresses everything.
>
> Hmm. So restore would store this extra information
> somewhere else, and not update the normal stuff at all?
> What exactly would we do with that extra data?

Hm I guess I didn't think this through. But the idea I had was:
- _restore always recomputes back from the last
drm_crtc_handle_vblank-stored timestamp.
- the first drm_crtc_handle_vblank bakes in any corrections that
_restore has prepared meanwhile
- same applies to all the sampling functions where we might look at the
latest timestamps/counter values.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-08 17:43                 ` [Intel-gfx] " Daniel Vetter
@ 2021-02-08 18:05                   ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-08 18:05 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Mon, Feb 08, 2021 at 06:43:53PM +0100, Daniel Vetter wrote:
> On Mon, Feb 8, 2021 at 5:58 PM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Mon, Feb 08, 2021 at 10:56:36AM +0100, Daniel Vetter wrote:
> > > On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> > > > On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > > > > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > > > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > > >
> > > > > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > > > > by guesstimating how many frames were missed purely based on
> > > > > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > > > > sampled timestamp.
> > > > > > > > >
> > > > > > > > > If we should call this function before a full frame has
> > > > > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > > > > with a possibly slightly different timestamp value for the
> > > > > > > > > same frame. Currently we will happily overwrite the already
> > > > > > > > > stored timestamp for the frame with the new value. This
> > > > > > > > > could cause userspace to observe two different timestamps
> > > > > > > > > for the same frame (and the timestamp could even go
> > > > > > > > > backwards depending on how much error we introduce when
> > > > > > > > > correcting the timestamp based on the scanout position).
> > > > > > > > >
> > > > > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > > > > counter to determine how many frames have elapsed.
> > > > > > > >
> > > > > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > > > > counter?
> > > > > > >
> > > > > > > Because next time a vblank irq happens the code will do:
> > > > > > > diff = current_hw_counter - vblank->last
> > > > > > >
> > > > > > > which won't work very well if vblank->last is garbage.
> > > > > > >
> > > > > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > > > > exists at all.
> > > > > >
> > > > > > Oh sure, _restore has to update this, together with the timestamp.
> > > > > >
> > > > > > But your code adds such an update where we update the hw vblank counter,
> > > > > > but not the timestamp, and that feels buggy. Either we're still in the
> > > > > > same frame, and then we should story nothing. Or we advanced, and then we
> > > > > > probably want a new timestampt for that frame too.
> > > > >
> > > > > Even if we're still in the same frame the hw frame counter may already
> > > > > have been reset due to the power well having been turned off. That is
> > > > > what I'm trying to fix here.
> > > > >
> > > > > Now I suppose that's fairly unlikely, at least with PSR which probably
> > > > > does impose some extra delays before the power gets yanked. But at least
> > > > > theoretically possible.
> > > >
> > > > Pondering about this a bit further. I think the fact that the current
> > > > code takes the round-to-closest approach I used for the vblank handler
> > > > is perhaps a bit bad. It could push the seq counter forward if we're
> > > > past the halfway point of a frame. I think that rounding behaviour
> > > > makes sense for the irq since those tick steadily and so allowing a bit
> > > > of error either way seems correct to me. Perhaps round-down might be
> > > > the better option for _restore(). Not quites sure, need more thinking
> > > > probably.
> > >
> > > Yes this is the rounding I'm worried about.
> >
> > Actually I don't think this is really an issue since we are working
> > with the corrected timestamps here. Those always line up with
> > frames, so unless the correction is really buggy or the hw somehow
> > skips a partial frame it should work rather well. At least when
> > operating with small timescales. For large gaps the error might
> > creep up, but I don't think a small error in the predicted seq
> > number over a long timespan is really a problem.
> 
> That corrected timestamp is what can go wrong I think: There's no
> guarantee that drm_crtc_vblank_helper_get_vblank_timestamp_internal()
> flips to top-of-frame at the exact same time than when the hw vblank
> counter flips. Or at least I'm not seeing where we correct them both
> together.

We do this seqlock type of thing:
	do {
		cur_vblank = __get_vblank_counter(dev, pipe);
		rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, in_vblank_irq);
	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);

which guarantees the timestamp really is for the frame we think it is for.

> 
> > > But your point above that the hw might reset the counter again is also
> > > valid. I'm assuming what you're worried about is that we first do a
> > > _restore (and the hw vblank counter hasn't been trashed yet), and then in
> > > the same frame we do another restore, but now the hw frame counter has
> > > been trashe, and we need to update it?
> >
> > Yeah, although the pre-trashing _restore could also just be
> > a vblank irq I think.
> >
> > >
> > > > Another idea that came to me now is that maybe we should actually just
> > > > check if the current hw frame counter value looks sane, as in something
> > > > like:
> > > >
> > > > diff_hw_counter = current_hw_counter-stored_hw_counter
> > > > diff_ts = (current_ts-stored_ts)/framedur
> > > >
> > > > if (diff_hw_counter ~= diff_ts)
> > > >     diff = diff_hw_counter;
> > > > else
> > > >     diff = diff_ts;
> > > >
> > > > and if they seem to match then just keep trusting the hw counter.
> > > > So only if there's a significant difference would we disregard
> > > > the diff of the hw counter and instead use the diff based on the
> > > > timestamps. Not sure what "significant" is though; One frame, two
> > > > frames?
> > >
> > > Hm, another idea: The only point where we can trust the entire hw counter
> > > + timestamp sampling is when the irq happens. Because then we know the
> > > driver will have properly corrected for any hw oddities (like hw counter
> > > flipping not at top-of-frame, like the core expects).
> >
> > i915 at least gives out correct data regardless of when you sample
> > it. Well, except for the cases where the hw counter gets trashed,
> > in which case the hw counter is garbage (when compared with .last)
> > but the timestamp is still correct.
> 
> Hm where/how do we handle this? Maybe I'm just out of date with how it
> all works nowadays.

There's not much to handle. We know when exactly the counters increment and
thus can give out the correct answer to the question "which frame is this?".

> 
> > > So what if _restore always goes back to the last such trusted hw counter
> > > for computing the frame counter diff and all that stuff? That way if we
> > > have a bunch of _restore with incosisten hw vblank counter, we will a)
> > > only take the last one (fixes the bug you're trying to fix) b) still use
> > > the same last trusted baseline for computations (addresses the race I'm
> > > seeing).
> > >
> > > Or does this not work?
> >
> > I don't think I really understand what you're suggesting here.
> > _restore is already using the last trusted data (the stored
> > timestamp + .last).
> >
> > So the one thing _restore will have to update is .last.
> > I think it can either do what it does now and set .last
> > to the current hw counter value + update the timestamp
> > to match, or it could perhaps adjust the stored .last
> > such that the already stored timestamp and the updated
> > .last match up. But I think both of those options have
> > the same level or inaccuracy since both would still do
> > the same ts_diff->hw_counter_diff prediction.
> >
> > >
> > > It does complicate the code a bit, because we'd need to store the
> > > count/timestamp information from _restore outside of the usual vblank ts
> > > array. But I think that addresses everything.
> >
> > Hmm. So restore would store this extra information
> > somewhere else, and not update the normal stuff at all?
> > What exactly would we do with that extra data?
> 
> Hm I guess I didn't think this through. But the idea I had was:
> - _restore always recomputes back from the las
> drm_crtc_handl_vblank-stored timestamp.
> - the first drm_crtc_handle_vblank bakes in any corrections that
> _restore has prepared meanwhile
> - same applies to all the sampling functions we might look at lastes
> timestamps/counter values.

So I guess instead of _restore adjusting .last we would maintain
separate correction information and apply it when doing the diff
between the current hw counter and .last. Not sure why that would
be particularly better than just adjusting .last directly.
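
I.e. the choice is mostly about where the fixup lives; something
like this (sketch only, "restore_correction" is a made-up field):

	/* _restore adjusting .last directly: */
	vblank->last = (cur_vblank - diff) & max_vblank_count;

	/* vs. keeping a separate correction and applying it whenever
	 * the vblank irq computes the diff against .last:
	 */
	diff = (cur_vblank - vblank->last + vblank->restore_correction) &
		max_vblank_count;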

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-08 18:05                   ` Ville Syrjälä
  0 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-08 18:05 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, Dhinakaran Pandiyan, dri-devel

On Mon, Feb 08, 2021 at 06:43:53PM +0100, Daniel Vetter wrote:
> On Mon, Feb 8, 2021 at 5:58 PM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Mon, Feb 08, 2021 at 10:56:36AM +0100, Daniel Vetter wrote:
> > > On Fri, Feb 05, 2021 at 11:19:19PM +0200, Ville Syrjälä wrote:
> > > > On Fri, Feb 05, 2021 at 06:24:08PM +0200, Ville Syrjälä wrote:
> > > > > On Fri, Feb 05, 2021 at 04:46:27PM +0100, Daniel Vetter wrote:
> > > > > > On Thu, Feb 04, 2021 at 05:55:28PM +0200, Ville Syrjälä wrote:
> > > > > > > On Thu, Feb 04, 2021 at 04:32:16PM +0100, Daniel Vetter wrote:
> > > > > > > > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > > >
> > > > > > > > > drm_vblank_restore() exists because certain power saving states
> > > > > > > > > can clobber the hardware frame counter. The way it does this is
> > > > > > > > > by guesstimating how many frames were missed purely based on
> > > > > > > > > the difference between the last stored timestamp vs. a newly
> > > > > > > > > sampled timestamp.
> > > > > > > > >
> > > > > > > > > If we should call this function before a full frame has
> > > > > > > > > elapsed since we sampled the last timestamp we would end up
> > > > > > > > > with a possibly slightly different timestamp value for the
> > > > > > > > > same frame. Currently we will happily overwrite the already
> > > > > > > > > stored timestamp for the frame with the new value. This
> > > > > > > > > could cause userspace to observe two different timestamps
> > > > > > > > > for the same frame (and the timestamp could even go
> > > > > > > > > backwards depending on how much error we introduce when
> > > > > > > > > correcting the timestamp based on the scanout position).
> > > > > > > > >
> > > > > > > > > To avoid that let's not update the stored timestamp unless we're
> > > > > > > > > also incrementing the sequence counter. We do still want to update
> > > > > > > > > vblank->last with the freshly sampled hw frame counter value so
> > > > > > > > > that subsequent vblank irqs/queries can actually use the hw frame
> > > > > > > > > counter to determine how many frames have elapsed.
> > > > > > > >
> > > > > > > > Hm I'm not getting the reason for why we store the updated hw vblank
> > > > > > > > counter?
> > > > > > >
> > > > > > > Because next time a vblank irq happens the code will do:
> > > > > > > diff = current_hw_counter - vblank->last
> > > > > > >
> > > > > > > which won't work very well if vblank->last is garbage.
> > > > > > >
> > > > > > > Updating vblank->last is pretty much why drm_vblank_restore()
> > > > > > > exists at all.
> > > > > >
> > > > > > Oh sure, _restore has to update this, together with the timestamp.
> > > > > >
> > > > > > But your code adds such an update where we update the hw vblank counter,
> > > > > > but not the timestamp, and that feels buggy. Either we're still in the
> > > > > > same frame, and then we should story nothing. Or we advanced, and then we
> > > > > > probably want a new timestampt for that frame too.
> > > > >
> > > > > Even if we're still in the same frame the hw frame counter may already
> > > > > have been reset due to the power well having been turned off. That is
> > > > > what I'm trying to fix here.
> > > > >
> > > > > Now I suppose that's fairly unlikely, at least with PSR which probably
> > > > > does impose some extra delays before the power gets yanked. But at least
> > > > > theoretically possible.
> > > >
> > > > Pondering about this a bit further. I think the fact that the current
> > > > code takes the round-to-closest approach I used for the vblank handler
> > > > is perhaps a bit bad. It could push the seq counter forward if we're
> > > > past the halfway point of a frame. I think that rounding behaviour
> > > > makes sense for the irq since those tick steadily and so allowing a bit
> > > > of error either way seems correct to me. Perhaps round-down might be
> > > > the better option for _restore(). Not quites sure, need more thinking
> > > > probably.
> > >
> > > Yes this is the rounding I'm worried about.
> >
> > Actually I don't think this is really an issue since we are working
> > with the corrected timestamps here. Those always line up with
> > frames, so unless the correction is really buggy or the hw somehow
> > skips a partial frame it should work rather well. At least when
> > operating with small timescales. For large gaps the error might
> > creep up, but I don't think a small error in the predicted seq
> > number over a long timespan is really a problem.
> 
> That corrected timestamp is what can go wrong I think: There's no
> guarantee that drm_crtc_vblank_helper_get_vblank_timestamp_internal()
> flips to top-of-frame at the exact same time than when the hw vblank
> counter flips. Or at least I'm not seeing where we correct them both
> together.

We do this seqlock type of thing:
	do {
		cur_vblank = __get_vblank_counter(dev, pipe);
		rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, in_vblank_irq);
	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);

which guarantees the timestamp really is for the frame we think it is for.

> 
> > > But your point above that the hw might reset the counter again is also
> > > valid. I'm assuming what you're worried about is that we first do a
> > > _restore (and the hw vblank counter hasn't been trashed yet), and then in
> > > the same frame we do another restore, but now the hw frame counter has
> > > been trashe, and we need to update it?
> >
> > Yeah, although the pre-trashing _restore could also just be
> > a vblank irq I think.
> >
> > >
> > > > Another idea that came to me now is that maybe we should actually just
> > > > check if the current hw frame counter value looks sane, as in something
> > > > like:
> > > >
> > > > diff_hw_counter = current_hw_counter-stored_hw_counter
> > > > diff_ts = (current_ts-stored_ts)/framedur
> > > >
> > > > if (diff_hw_counter ~= diff_ts)
> > > >     diff = diff_hw_counter;
> > > > else
> > > >     diff = diff_ts;
> > > >
> > > > and if they seem to match then just keep trusting the hw counter.
> > > > So only if there's a significant difference would we disregard
> > > > the diff of the hw counter and instead use the diff based on the
> > > > timestamps. Not sure what "significant" is though; One frame, two
> > > > frames?
> > >
> > > Hm, another idea: The only point where we can trust the entire hw counter
> > > + timestamp sampling is when the irq happens. Because then we know the
> > > driver will have properly corrected for any hw oddities (like hw counter
> > > flipping not at top-of-frame, like the core expects).
> >
> > i915 at least gives out correct data regardless of when you sample
> > it. Well, except for the cases where the hw counter gets trashed,
> > in which case the hw counter is garbage (when compared with .last)
> > but the timestamp is still correct.
> 
> Hm where/how do we handle this? Maybe I'm just out of date with how it
> all works nowadays.

There's not much to handle. We know when exactly the counters increment and
thus can give out the correct answer to the question "which frame is this?".

> 
> > > So what if _restore always goes back to the last such trusted hw counter
> > > for computing the frame counter diff and all that stuff? That way if we
> > > have a bunch of _restore with incosisten hw vblank counter, we will a)
> > > only take the last one (fixes the bug you're trying to fix) b) still use
> > > the same last trusted baseline for computations (addresses the race I'm
> > > seeing).
> > >
> > > Or does this not work?
> >
> > I don't think I really understand what you're suggesting here.
> > _restore is already using the last trusted data (the stored
> > timestamp + .last).
> >
> > So the one thing _restore will have to update is .last.
> > I think it can either do what it does now and set .last
> > to the current hw counter value + update the timestamp
> > to match, or it could perhaps adjust the stored .last
> > such that the already stored timestamp and the updated
> > .last match up. But I think both of those options have
> > the same level or inaccuracy since both would still do
> > the same ts_diff->hw_counter_diff prediction.
> >
> > >
> > > It does complicate the code a bit, because we'd need to store the
> > > count/timestamp information from _restore outside of the usual vblank ts
> > > array. But I think that addresses everything.
> >
> > Hmm. So restore would store this extra information
> > somewhere else, and not update the normal stuff at all?
> > What exactly would we do with that extra data?
> 
> Hm I guess I didn't think this through. But the idea I had was:
> - _restore always recomputes back from the las
> drm_crtc_handl_vblank-stored timestamp.
> - the first drm_crtc_handle_vblank bakes in any corrections that
> _restore has prepared meanwhile
> - same applies to all the sampling functions we might look at lastes
> timestamps/counter values.

So I guess instead of _restore adjusting .last we would maintain
separate correction information and apply it when doing the diff
between the current hw counter and .last. Not sure why that would
be particularly better than just adjusting .last directly.

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
@ 2021-02-09 10:07   ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-09 10:07 UTC (permalink / raw)
  To: Ville Syrjala
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> drm_vblank_restore() exists because certain power saving states
> can clobber the hardware frame counter. The way it does this is
> by guesstimating how many frames were missed purely based on
> the difference between the last stored timestamp vs. a newly
> sampled timestamp.
> 
> If we should call this function before a full frame has
> elapsed since we sampled the last timestamp we would end up
> with a possibly slightly different timestamp value for the
> same frame. Currently we will happily overwrite the already
> stored timestamp for the frame with the new value. This
> could cause userspace to observe two different timestamps
> for the same frame (and the timestamp could even go
> backwards depending on how much error we introduce when
> correcting the timestamp based on the scanout position).
> 
> To avoid that let's not update the stored timestamp unless we're
> also incrementing the sequence counter. We do still want to update
> vblank->last with the freshly sampled hw frame counter value so
> that subsequent vblank irqs/queries can actually use the hw frame
> counter to determine how many frames have elapsed.
> 
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Ok, top-posting because lol I got confused. I mixed up the guesstimation
work we do for when we don't have a vblank counter with the precise vblank
timestamp stuff.

I think it'd still be good to maybe lock down/document a bit better the
requirements for drm_crtc_vblank_restore, but I convinced myself now that
your patch looks correct.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> ---
>  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 893165eeddf3..e127a7db2088 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
>  
>  	vblank->last = last;
>  
> +	/*
> +	 * drm_vblank_restore() wants to always update
> +	 * vblank->last since we can't trust the frame counter
> +	 * across power saving states. But we don't want to alter
> +	 * the stored timestamp for the same frame number since
> +	 * that would cause userspace to potentially observe two
> +	 * different timestamps for the same frame.
> +	 */
> +	if (vblank_count_inc == 0)
> +		return;
> +
>  	write_seqlock(&vblank->seqlock);
>  	vblank->time = t_vblank;
>  	atomic64_add(vblank_count_inc, &vblank->count);
> -- 
> 2.26.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-09 10:07   ` Daniel Vetter
  0 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-09 10:07 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel

On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> drm_vblank_restore() exists because certain power saving states
> can clobber the hardware frame counter. The way it does this is
> by guesstimating how many frames were missed purely based on
> the difference between the last stored timestamp vs. a newly
> sampled timestamp.
> 
> If we should call this function before a full frame has
> elapsed since we sampled the last timestamp we would end up
> with a possibly slightly different timestamp value for the
> same frame. Currently we will happily overwrite the already
> stored timestamp for the frame with the new value. This
> could cause userspace to observe two different timestamps
> for the same frame (and the timestamp could even go
> backwards depending on how much error we introduce when
> correcting the timestamp based on the scanout position).
> 
> To avoid that let's not update the stored timestamp unless we're
> also incrementing the sequence counter. We do still want to update
> vblank->last with the freshly sampled hw frame counter value so
> that subsequent vblank irqs/queries can actually use the hw frame
> counter to determine how many frames have elapsed.
> 
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Ok, top-posting because lol I got confused. I mixed up the guesstimation
work we do for when we don't have a vblank counter with the precise vblank
timestamp stuff.

I think it'd still be good to maybe lock down/document a bit better the
requirements for drm_crtc_vblank_restore, but I convinced myself now that
your patch looks correct.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> ---
>  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 893165eeddf3..e127a7db2088 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
>  
>  	vblank->last = last;
>  
> +	/*
> +	 * drm_vblank_restore() wants to always update
> +	 * vblank->last since we can't trust the frame counter
> +	 * across power saving states. But we don't want to alter
> +	 * the stored timestamp for the same frame number since
> +	 * that would cause userspace to potentially observe two
> +	 * different timestamps for the same frame.
> +	 */
> +	if (vblank_count_inc == 0)
> +		return;
> +
>  	write_seqlock(&vblank->seqlock);
>  	vblank->time = t_vblank;
>  	atomic64_add(vblank_count_inc, &vblank->count);
> -- 
> 2.26.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-09 10:07   ` [Intel-gfx] " Daniel Vetter
@ 2021-02-09 15:40     ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-09 15:40 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Tue, Feb 09, 2021 at 11:07:53AM +0100, Daniel Vetter wrote:
> On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > drm_vblank_restore() exists because certain power saving states
> > can clobber the hardware frame counter. The way it does this is
> > by guesstimating how many frames were missed purely based on
> > the difference between the last stored timestamp vs. a newly
> > sampled timestamp.
> > 
> > If we should call this function before a full frame has
> > elapsed since we sampled the last timestamp we would end up
> > with a possibly slightly different timestamp value for the
> > same frame. Currently we will happily overwrite the already
> > stored timestamp for the frame with the new value. This
> > could cause userspace to observe two different timestamps
> > for the same frame (and the timestamp could even go
> > backwards depending on how much error we introduce when
> > correcting the timestamp based on the scanout position).
> > 
> > To avoid that let's not update the stored timestamp unless we're
> > also incrementing the sequence counter. We do still want to update
> > vblank->last with the freshly sampled hw frame counter value so
> > that subsequent vblank irqs/queries can actually use the hw frame
> > counter to determine how many frames have elapsed.
> > 
> > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Ok, top-posting because lol I got confused. I mixed up the guesstimation
> work we do for when we don't have a vblank counter with the precise vblank
> timestamp stuff.
> 
> I think it'd still be good to maybe lock down/document a bit better the
> requirements for drm_crtc_vblank_restore, but I convinced myself now that
> your patch looks correct.
> 
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Ta.

Though I wonder if we should just do something like this instead:
-       store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+       vblank->last = (cur_vblank - diff) & max_vblank_count;

to make it entirely obvious that this exists only to fix up
the stored hw counter value?

Would also avoid the problem the original patch tries to fix
because we'd simply never store a new timestamp here.
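
I.e. the tail of _restore would become something like this
(sketch only, assuming max_vblank_count is looked up the same
way drm_update_vblank_count() does it):

	diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
	diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns);
	/* rebase only the hw counter baseline; vblank->time and
	 * vblank->count stay untouched
	 */
	vblank->last = (cur_vblank - diff) & max_vblank_count;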

> 
> > ---
> >  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > index 893165eeddf3..e127a7db2088 100644
> > --- a/drivers/gpu/drm/drm_vblank.c
> > +++ b/drivers/gpu/drm/drm_vblank.c
> > @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> >  
> >  	vblank->last = last;
> >  
> > +	/*
> > +	 * drm_vblank_restore() wants to always update
> > +	 * vblank->last since we can't trust the frame counter
> > +	 * across power saving states. But we don't want to alter
> > +	 * the stored timestamp for the same frame number since
> > +	 * that would cause userspace to potentially observe two
> > +	 * different timestamps for the same frame.
> > +	 */
> > +	if (vblank_count_inc == 0)
> > +		return;
> > +
> >  	write_seqlock(&vblank->seqlock);
> >  	vblank->time = t_vblank;
> >  	atomic64_add(vblank_count_inc, &vblank->count);
> > -- 
> > 2.26.2
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-09 15:40     ` Ville Syrjälä
  0 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-09 15:40 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel

On Tue, Feb 09, 2021 at 11:07:53AM +0100, Daniel Vetter wrote:
> On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > drm_vblank_restore() exists because certain power saving states
> > can clobber the hardware frame counter. The way it does this is
> > by guesstimating how many frames were missed purely based on
> > the difference between the last stored timestamp vs. a newly
> > sampled timestamp.
> > 
> > If we should call this function before a full frame has
> > elapsed since we sampled the last timestamp we would end up
> > with a possibly slightly different timestamp value for the
> > same frame. Currently we will happily overwrite the already
> > stored timestamp for the frame with the new value. This
> > could cause userspace to observe two different timestamps
> > for the same frame (and the timestamp could even go
> > backwards depending on how much error we introduce when
> > correcting the timestamp based on the scanout position).
> > 
> > To avoid that let's not update the stored timestamp unless we're
> > also incrementing the sequence counter. We do still want to update
> > vblank->last with the freshly sampled hw frame counter value so
> > that subsequent vblank irqs/queries can actually use the hw frame
> > counter to determine how many frames have elapsed.
> > 
> > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Ok, top-posting because lol I got confused. I mixed up the guesstimation
> work we do for when we don't have a vblank counter with the precise vblank
> timestamp stuff.
> 
> I think it'd still be good to maybe lock down/document a bit better the
> requirements for drm_crtc_vblank_restore, but I convinced myself now that
> your patch looks correct.
> 
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Ta.

Though I wonder if we should just do something like this instead:
-       store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+       vblank->last = (cur_vblank - diff) & max_vblank_count;

to make it entirely obvious that this exists only to fix up
the stored hw counter value?

Would also avoid the problem the original patch tries to fix
because we'd simply never store a new timestamp here.

> 
> > ---
> >  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > index 893165eeddf3..e127a7db2088 100644
> > --- a/drivers/gpu/drm/drm_vblank.c
> > +++ b/drivers/gpu/drm/drm_vblank.c
> > @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> >  
> >  	vblank->last = last;
> >  
> > +	/*
> > +	 * drm_vblank_restore() wants to always update
> > +	 * vblank->last since we can't trust the frame counter
> > +	 * across power saving states. But we don't want to alter
> > +	 * the stored timestamp for the same frame number since
> > +	 * that would cause userspace to potentially observe two
> > +	 * different timestamps for the same frame.
> > +	 */
> > +	if (vblank_count_inc == 0)
> > +		return;
> > +
> >  	write_seqlock(&vblank->seqlock);
> >  	vblank->time = t_vblank;
> >  	atomic64_add(vblank_count_inc, &vblank->count);
> > -- 
> > 2.26.2
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
  2021-02-09 15:40     ` [Intel-gfx] " Ville Syrjälä
@ 2021-02-09 16:44       ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-09 16:44 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Tue, Feb 9, 2021 at 4:41 PM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
> On Tue, Feb 09, 2021 at 11:07:53AM +0100, Daniel Vetter wrote:
> > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > >
> > > drm_vblank_restore() exists because certain power saving states
> > > can clobber the hardware frame counter. The way it does this is
> > > by guesstimating how many frames were missed purely based on
> > > the difference between the last stored timestamp vs. a newly
> > > sampled timestamp.
> > >
> > > If we should call this function before a full frame has
> > > elapsed since we sampled the last timestamp we would end up
> > > with a possibly slightly different timestamp value for the
> > > same frame. Currently we will happily overwrite the already
> > > stored timestamp for the frame with the new value. This
> > > could cause userspace to observe two different timestamps
> > > for the same frame (and the timestamp could even go
> > > backwards depending on how much error we introduce when
> > > correcting the timestamp based on the scanout position).
> > >
> > > To avoid that let's not update the stored timestamp unless we're
> > > also incrementing the sequence counter. We do still want to update
> > > vblank->last with the freshly sampled hw frame counter value so
> > > that subsequent vblank irqs/queries can actually use the hw frame
> > > counter to determine how many frames have elapsed.
> > >
> > > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> >
> > Ok, top-posting because lol I got confused. I mixed up the guesstimation
> > work we do for when we don't have a vblank counter with the precise vblank
> > timestamp stuff.
> >
> > I think it'd still be good to maybe lock down/document a bit better the
> > requirements for drm_crtc_vblank_restore, but I convinced myself now that
> > your patch looks correct.
> >
> > Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Ta.
>
> Though I wonder if we should just do something like this instead:
> -       store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
> +       vblank->last = (cur_vblank - diff) & max_vblank_count;
>
> to make it entirely obvious that this exists only to fix up
> the stored hw counter value?
>
> Would also avoid the problem the original patch tries to fix
> because we'd simply never store a new timestamp here.

Hm yeah, I think that would nicely limit the impact. But need to check
overflow/underflow math is all correct. And I think that would neatly
implement the trick I proposed to address the bug that wasn't there
:-)

The only thing that I've thought of as an issue is that we might have
more wrap-around of the hw vblank counter, but that shouldn't be worse
than without this - anytime we have the vblank on for long enough we
fix the entire thing, and I think our wrap handling is now consistent
enough (there was some "let's just add a large bump" stuff for dri1
userspace iirc) that this shouldn't be any problem.
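
For illustration, here's a quick standalone sketch of the masked
subtraction being discussed (the values and the little test program are
made up for the example; the real code derives the mask from
drm_max_vblank_count()):

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		/* assume a 24-bit hw frame counter */
		uint32_t max_vblank_count = 0xffffff;
		uint32_t cur_vblank = 2;   /* freshly sampled hw counter */
		uint32_t diff = 5;         /* frames guessed from timestamps */

		/* 2 - 5 underflows to 0xfffffffd; the mask wraps it back
		 * into counter range, i.e. 0xfffffd */
		uint32_t last = (cur_vblank - diff) & max_vblank_count;
		assert(last == 0xfffffd);

		/* a later vblank irq doing the forward diff the same way
		 * recovers the expected 5 frames */
		assert(((cur_vblank - last) & max_vblank_count) == diff);

		return 0;
	}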

Plus the comment about _restore being very special would be in the
restore function, so this would also be rather tidy. If you go with
this maybe extend the kerneldoc for ->last to mention that
drm_vblank_restore() adjusts it?
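
Something along these lines in the kerneldoc for ->last, perhaps (wording
is just a sketch, not actual header text):

	 * @last:
	 *	...existing description...
	 *	Note that drm_vblank_restore() may rewrite this value
	 *	directly when the hw frame counter can't be trusted
	 *	across a power saving state.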

The more I ponder this, the more I like it ... which probably means
I'm missing something, because this is drm_vblank.c?

Cheers, Daniel

>
> >
> > > ---
> > >  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > index 893165eeddf3..e127a7db2088 100644
> > > --- a/drivers/gpu/drm/drm_vblank.c
> > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> > >
> > >     vblank->last = last;
> > >
> > > +   /*
> > > +    * drm_vblank_restore() wants to always update
> > > +    * vblank->last since we can't trust the frame counter
> > > +    * across power saving states. But we don't want to alter
> > > +    * the stored timestamp for the same frame number since
> > > +    * that would cause userspace to potentially observe two
> > > +    * different timestamps for the same frame.
> > > +    */
> > > +   if (vblank_count_inc == 0)
> > > +           return;
> > > +
> > >     write_seqlock(&vblank->seqlock);
> > >     vblank->time = t_vblank;
> > >     atomic64_add(vblank_count_inc, &vblank->count);
> > > --
> > > 2.26.2
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
>
> --
> Ville Syrjälä
> Intel



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice
@ 2021-02-09 16:44       ` Daniel Vetter
  0 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-09 16:44 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, Dhinakaran Pandiyan, dri-devel

On Tue, Feb 9, 2021 at 4:41 PM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
> On Tue, Feb 09, 2021 at 11:07:53AM +0100, Daniel Vetter wrote:
> > On Thu, Feb 04, 2021 at 04:04:00AM +0200, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > >
> > > drm_vblank_restore() exists because certain power saving states
> > > can clobber the hardware frame counter. The way it does this is
> > > by guesstimating how many frames were missed purely based on
> > > the difference between the last stored timestamp vs. a newly
> > > sampled timestamp.
> > >
> > > If we should call this function before a full frame has
> > > elapsed since we sampled the last timestamp we would end up
> > > with a possibly slightly different timestamp value for the
> > > same frame. Currently we will happily overwrite the already
> > > stored timestamp for the frame with the new value. This
> > > could cause userspace to observe two different timestamps
> > > for the same frame (and the timestamp could even go
> > > backwards depending on how much error we introduce when
> > > correcting the timestamp based on the scanout position).
> > >
> > > To avoid that let's not update the stored timestamp unless we're
> > > also incrementing the sequence counter. We do still want to update
> > > vblank->last with the freshly sampled hw frame counter value so
> > > that subsequent vblank irqs/queries can actually use the hw frame
> > > counter to determine how many frames have elapsed.
> > >
> > > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> >
> > Ok, top-posting because lol I got confused. I mixed up the guesstimation
> > work we do for when we don't have a vblank counter with the precise vblank
> > timestamp stuff.
> >
> > I think it'd still be good to maybe lock down/document a bit better the
> > requirements for drm_crtc_vblank_restore, but I convinced myself now that
> > your patch looks correct.
> >
> > Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Ta.
>
> Though I wonder if we should just do something like this instead:
> -       store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
> +       vblank->last = (cur_vblank - diff) & max_vblank_count;
>
> to make it entirely obvious that this exists only to fix up
> the stored hw counter value?
>
> Would also avoid the problem the original patch tries to fix
> because we'd simply never store a new timestamp here.

Hm yeah, I think that would nicely limit the impact. But need to check
overflow/underflow math is all correct. And I think that would neatly
implement the trick I proposed to address the bug that wasn't there
:-)

The only thing that I've thought of as an issue is that we might have
more wrap-around of the hw vblank counter, but that shouldn't be worse
than without this - anytime we have the vblank on for long enough we
fix the entire thing, and I think our wrap handling is now consistent
enough (there was some "let's just add a large bump" stuff for dri1
userspace iirc) that this shouldn't be any problem.

Plus the comment about _restore being very special would be in the
restore function, so this would also be rather tidy. If you go with
this maybe extend the kerneldoc for ->last to mention that
drm_vblank_restore() adjusts it?

The more I ponder this, the more I like it ... which probably means
I'm missing something, because this is drm_vblank.c?

Cheers, Daniel

>
> >
> > > ---
> > >  drivers/gpu/drm/drm_vblank.c | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > index 893165eeddf3..e127a7db2088 100644
> > > --- a/drivers/gpu/drm/drm_vblank.c
> > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > @@ -176,6 +176,17 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
> > >
> > >     vblank->last = last;
> > >
> > > +   /*
> > > +    * drm_vblank_restore() wants to always update
> > > +    * vblank->last since we can't trust the frame counter
> > > +    * across power saving states. But we don't want to alter
> > > +    * the stored timestamp for the same frame number since
> > > +    * that would cause userspace to potentially observe two
> > > +    * different timestamps for the same frame.
> > > +    */
> > > +   if (vblank_count_inc == 0)
> > > +           return;
> > > +
> > >     write_seqlock(&vblank->seqlock);
> > >     vblank->time = t_vblank;
> > >     atomic64_add(vblank_count_inc, &vblank->count);
> > > --
> > > 2.26.2
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
>
> --
> Ville Syrjälä
> Intel



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
@ 2021-02-18 16:03   ` Ville Syrjala
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjala @ 2021-02-18 16:03 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, Rodrigo Vivi

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

drm_vblank_restore() exists because certain power saving states
can clobber the hardware frame counter. The way it does this is
by guesstimating how many frames were missed purely based on
the difference between the last stored timestamp vs. a newly
sampled timestamp.

If we should call this function before a full frame has
elapsed since we sampled the last timestamp we would end up
with a possibly slightly different timestamp value for the
same frame. Currently we will happily overwrite the already
stored timestamp for the frame with the new value. This
could cause userspace to observe two different timestamps
for the same frame (and the timestamp could even go
backwards depending on how much error we introduce when
correcting the timestamp based on the scanout position).

To avoid that let's not update the stored timestamp at all,
and instead we just fix up the last recorded hw vblank counter
value such that the already stored timestamp/seq number will
match. Thus the next time a vblank irq happens it will calculate
the correct diff between the current and stored hw vblank counter
values.

Sidenote: Another possible idea that came to mind would be to
do this correction only if the power really was removed since
the last time we sampled the hw frame counter. But to do that
we would need a robust way to detect when it has occurred. Some
possibilities could involve some kind of hardware power well
transition counter, or potentially we could store a magic value
in a scratch register that lives in the same power well. But
I'm not sure either of those exist, so would need an actual
investigation to find out. All of that is very hardware specific
of course, so would have to be done in the driver code.

Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 2bd989688eae..3417e1ac7918 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -1478,6 +1478,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
 	u64 diff_ns;
 	u32 cur_vblank, diff = 1;
 	int count = DRM_TIMESTAMP_MAXRETRIES;
+	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);
 
 	if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
 		return;
@@ -1504,7 +1505,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
 	drm_dbg_vbl(dev,
 		    "missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
 		    diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
-	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+	vblank->last = (cur_vblank - diff) & max_vblank_count;
 }
 
 /**
-- 
2.26.2

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Intel-gfx] [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()
@ 2021-02-18 16:03   ` Ville Syrjala
  0 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjala @ 2021-02-18 16:03 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

drm_vblank_restore() exists because certain power saving states
can clobber the hardware frame counter. The way it does this is
by guesstimating how many frames were missed purely based on
the difference between the last stored timestamp vs. a newly
sampled timestamp.

If we should call this function before a full frame has
elapsed since we sampled the last timestamp we would end up
with a possibly slightly different timestamp value for the
same frame. Currently we will happily overwrite the already
stored timestamp for the frame with the new value. This
could cause userspace to observe two different timestamps
for the same frame (and the timestamp could even go
backwards depending on how much error we introduce when
correcting the timestamp based on the scanout position).

To avoid that let's not update the stored timestamp at all,
and instead we just fix up the last recorded hw vblank counter
value such that the already stored timestamp/seq number will
match. Thus the next time a vblank irq happens it will calculate
the correct diff between the current and stored hw vblank counter
values.

Sidenote: Another possible idea that came to mind would be to
do this correction only if the power really was removed since
the last time we sampled the hw frame counter. But to do that
we would need a robust way to detect when it has occurred. Some
possibilities could involve some kind of hardware power well
transition counter, or potentially we could store a magic value
in a scratch register that lives in the same power well. But
I'm not sure either of those exist, so would need an actual
investigation to find out. All of that is very hardware specific
of course, so would have to be done in the driver code.

Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 2bd989688eae..3417e1ac7918 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -1478,6 +1478,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
 	u64 diff_ns;
 	u32 cur_vblank, diff = 1;
 	int count = DRM_TIMESTAMP_MAXRETRIES;
+	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);
 
 	if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
 		return;
@@ -1504,7 +1505,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
 	drm_dbg_vbl(dev,
 		    "missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
 		    diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
-	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+	vblank->last = (cur_vblank - diff) & max_vblank_count;
 }
 
 /**
-- 
2.26.2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()
  2021-02-18 16:03   ` [Intel-gfx] " Ville Syrjala
@ 2021-02-18 16:10     ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-18 16:10 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, Rodrigo Vivi

On Thu, Feb 18, 2021 at 06:03:05PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> drm_vblank_restore() exists because certain power saving states
> can clobber the hardware frame counter. The way it does this is
> by guesstimating how many frames were missed purely based on
> the difference between the last stored timestamp vs. a newly
> sampled timestamp.
> 
> If we should call this function before a full frame has
> elapsed since we sampled the last timestamp we would end up
> with a possibly slightly different timestamp value for the
> same frame. Currently we will happily overwrite the already
> stored timestamp for the frame with the new value. This
> could cause userspace to observe two different timestamps
> for the same frame (and the timestamp could even go
> backwards depending on how much error we introduce when
> correcting the timestamp based on the scanout position).
> 
> To avoid that let's not update the stored timestamp at all,
> and instead we just fix up the last recorded hw vblank counter
> value such that the already stored timestamp/seq number will
> match. Thus the next time a vblank irq happens it will calculate
> the correct diff between the current and stored hw vblank counter
> values.
> 
> Sidenote: Another possible idea that came to mind would be to
> do this correction only if the power really was removed since
> the last time we sampled the hw frame counter. But to do that
> we would need a robust way to detect when it has occurred. Some
> possibilities could involve some kind of hardware power well
> transition counter, or potentially we could store a magic value
> in a scratch register that lives in the same power well. But
> I'm not sure either of those exist, so would need an actual
> investigation to find out. All of that is very hardware specific
> of course, so would have to be done in the driver code.

Forgot to mention that I wasn't able to test this with PSR
since HSW+PSR1 is borked, but I did test it a bit w/o PSR
by artificially adding arbitrary offsets to the reported
hw frame counter value. The behaviour seemed sane enough
at least.
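
For the curious, a debug hack of that sort might look roughly like this
(the hook and variable names here are hypothetical, not actual i915 code):

	#include <linux/types.h>
	#include <drm/drm_crtc.h>

	static u32 vblank_counter_skew;	/* tweaked by hand while testing */

	/* debug hack: skew the hw counter reported to the vblank core so
	 * that drm_vblank_restore() has something to correct */
	static u32 skewed_get_vblank_counter(struct drm_crtc *crtc)
	{
		/* real_get_vblank_counter() stands in for the driver's
		 * normal .get_vblank_counter hook */
		return real_get_vblank_counter(crtc) + vblank_counter_skew;
	}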

> 
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/drm_vblank.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 2bd989688eae..3417e1ac7918 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -1478,6 +1478,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
>  	u64 diff_ns;
>  	u32 cur_vblank, diff = 1;
>  	int count = DRM_TIMESTAMP_MAXRETRIES;
> +	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);
>  
>  	if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
>  		return;
> @@ -1504,7 +1505,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
>  	drm_dbg_vbl(dev,
>  		    "missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
>  		    diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
> -	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
> +	vblank->last = (cur_vblank - diff) & max_vblank_count;
>  }
>  
>  /**
> -- 
> 2.26.2

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()
@ 2021-02-18 16:10     ` Ville Syrjälä
  0 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-18 16:10 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan

On Thu, Feb 18, 2021 at 06:03:05PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> drm_vblank_restore() exists because certain power saving states
> can clobber the hardware frame counter. The way it does this is
> by guesstimating how many frames were missed purely based on
> the difference between the last stored timestamp vs. a newly
> sampled timestamp.
> 
> If we should call this function before a full frame has
> elapsed since we sampled the last timestamp we would end up
> with a possibly slightly different timestamp value for the
> same frame. Currently we will happily overwrite the already
> stored timestamp for the frame with the new value. This
> could cause userspace to observe two different timestamps
> for the same frame (and the timestamp could even go
> backwards depending on how much error we introduce when
> correcting the timestamp based on the scanout position).
> 
> To avoid that let's not update the stored timestamp at all,
> and instead we just fix up the last recorded hw vblank counter
> value such that the already stored timestamp/seq number will
> match. Thus the next time a vblank irq happens it will calculate
> the correct diff between the current and stored hw vblank counter
> values.
> 
> Sidenote: Another possible idea that came to mind would be to
> do this correction only if the power really was removed since
> the last time we sampled the hw frame counter. But to do that
> we would need a robust way to detect when it has occurred. Some
> possibilities could involve some kind of hardware power well
> transition counter, or potentially we could store a magic value
> in a scratch register that lives in the same power well. But
> I'm not sure either of those exist, so would need an actual
> investigation to find out. All of that is very hardware specific
> of course, so would have to be done in the driver code.

Forgot to mention that I wasn't able to test this with PSR
since HSW+PSR1 is borked, but I did test it a bit w/o PSR
by artificially adding arbitrary offsets to the reported
hw frame counter value. The behaviour seemed sane enough
at least.

> 
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/drm_vblank.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 2bd989688eae..3417e1ac7918 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -1478,6 +1478,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
>  	u64 diff_ns;
>  	u32 cur_vblank, diff = 1;
>  	int count = DRM_TIMESTAMP_MAXRETRIES;
> +	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);
>  
>  	if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
>  		return;
> @@ -1504,7 +1505,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
>  	drm_dbg_vbl(dev,
>  		    "missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
>  		    diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
> -	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
> +	vblank->last = (cur_vblank - diff) & max_vblank_count;
>  }
>  
>  /**
> -- 
> 2.26.2

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
                   ` (5 preceding siblings ...)
  (?)
@ 2021-02-18 19:08 ` Patchwork
  2021-02-18 19:22   ` Ville Syrjälä
  -1 siblings, 1 reply; 43+ messages in thread
From: Patchwork @ 2021-02-18 19:08 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx


== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
URL   : https://patchwork.freedesktop.org/series/86672/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_9786 -> Patchwork_19701
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_19701 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_19701, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19701:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_exec_suspend@basic-s0:
    - fi-cfl-8109u:       [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html

  
Known issues
------------

  Here are the changes found in Patchwork_19701 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_cs_nop@sync-compute0:
    - fi-kbl-r:           NOTRUN -> [SKIP][3] ([fdo#109271]) +20 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@amdgpu/amd_cs_nop@sync-compute0.html

  * igt@gem_huc_copy@huc-copy:
    - fi-kbl-r:           NOTRUN -> [SKIP][4] ([fdo#109271] / [i915#2190])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@gem_huc_copy@huc-copy.html

  * igt@gem_linear_blits@basic:
    - fi-tgl-y:           [PASS][5] -> [DMESG-WARN][6] ([i915#402]) +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-tgl-y/igt@gem_linear_blits@basic.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-tgl-y/igt@gem_linear_blits@basic.html

  * igt@i915_pm_rpm@module-reload:
    - fi-kbl-guc:         [PASS][7] -> [FAIL][8] ([i915#2203] / [i915#579])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html

  * igt@kms_chamelium@hdmi-edid-read:
    - fi-kbl-r:           NOTRUN -> [SKIP][9] ([fdo#109271] / [fdo#111827]) +8 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@kms_chamelium@hdmi-edid-read.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - fi-kbl-r:           NOTRUN -> [SKIP][10] ([fdo#109271] / [i915#533])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  
#### Possible fixes ####

  * igt@gem_mmap_gtt@basic:
    - fi-tgl-y:           [DMESG-WARN][11] ([i915#402]) -> [PASS][12] +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-tgl-y/igt@gem_mmap_gtt@basic.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-tgl-y/igt@gem_mmap_gtt@basic.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2203]: https://gitlab.freedesktop.org/drm/intel/issues/2203
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#579]: https://gitlab.freedesktop.org/drm/intel/issues/579


Participating hosts (46 -> 39)
------------------------------

  Missing    (7): fi-cml-u2 fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-ehl-2 fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9786 -> Patchwork_19701

  CI-20190529: 20190529
  CI_DRM_9786: 487d534b8912194d104e05b66e3a0303800300ff @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6008: 34ccd8e8c38587e7d46ec964d30d17863b166fda @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19701: b9e2377b1bd55114447c010cfd7f8b4302744afa @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

b9e2377b1bd5 drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx]  ✗ Fi.CI.BAT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
  2021-02-18 19:08 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2) Patchwork
@ 2021-02-18 19:22   ` Ville Syrjälä
  2021-02-18 19:51     ` Vudum, Lakshminarayana
  0 siblings, 1 reply; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-18 19:22 UTC (permalink / raw)
  To: intel-gfx; +Cc: Vudum, Lakshminarayana

On Thu, Feb 18, 2021 at 07:08:27PM -0000, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
> URL   : https://patchwork.freedesktop.org/series/86672/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_9786 -> Patchwork_19701
> ====================================================
> 
> Summary
> -------
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with Patchwork_19701 absolutely need to be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_19701, please notify your bug team to allow them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html
> 
> Possible new issues
> -------------------
> 
>   Here are the unknown changes that may have been introduced in Patchwork_19701:
> 
> ### IGT changes ###
> 
> #### Possible regressions ####
> 
>   * igt@gem_exec_suspend@basic-s0:
>     - fi-cfl-8109u:       [PASS][1] -> [INCOMPLETE][2]
>    [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html
>    [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html

Looks like the machine went AWOL during suspend. Seems unrelated to
the patch at hand.

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
                   ` (6 preceding siblings ...)
  (?)
@ 2021-02-18 19:29 ` Patchwork
  -1 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-02-18 19:29 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx


== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
URL   : https://patchwork.freedesktop.org/series/86672/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9786 -> Patchwork_19701
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html

Known issues
------------

  Here are the changes found in Patchwork_19701 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_cs_nop@sync-compute0:
    - fi-kbl-r:           NOTRUN -> [SKIP][1] ([fdo#109271]) +20 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@amdgpu/amd_cs_nop@sync-compute0.html

  * igt@gem_exec_suspend@basic-s0:
    - fi-cfl-8109u:       [PASS][2] -> [INCOMPLETE][3] ([i915#155])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html

  * igt@gem_huc_copy@huc-copy:
    - fi-kbl-r:           NOTRUN -> [SKIP][4] ([fdo#109271] / [i915#2190])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@gem_huc_copy@huc-copy.html

  * igt@gem_linear_blits@basic:
    - fi-tgl-y:           [PASS][5] -> [DMESG-WARN][6] ([i915#402]) +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-tgl-y/igt@gem_linear_blits@basic.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-tgl-y/igt@gem_linear_blits@basic.html

  * igt@i915_pm_rpm@module-reload:
    - fi-kbl-guc:         [PASS][7] -> [FAIL][8] ([i915#2203] / [i915#579])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html

  * igt@kms_chamelium@hdmi-edid-read:
    - fi-kbl-r:           NOTRUN -> [SKIP][9] ([fdo#109271] / [fdo#111827]) +8 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@kms_chamelium@hdmi-edid-read.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - fi-kbl-r:           NOTRUN -> [SKIP][10] ([fdo#109271] / [i915#533])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-kbl-r/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  
#### Possible fixes ####

  * igt@gem_mmap_gtt@basic:
    - fi-tgl-y:           [DMESG-WARN][11] ([i915#402]) -> [PASS][12] +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-tgl-y/igt@gem_mmap_gtt@basic.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-tgl-y/igt@gem_mmap_gtt@basic.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#155]: https://gitlab.freedesktop.org/drm/intel/issues/155
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2203]: https://gitlab.freedesktop.org/drm/intel/issues/2203
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#579]: https://gitlab.freedesktop.org/drm/intel/issues/579


Participating hosts (46 -> 39)
------------------------------

  Missing    (7): fi-cml-u2 fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-ehl-2 fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9786 -> Patchwork_19701

  CI-20190529: 20190529
  CI_DRM_9786: 487d534b8912194d104e05b66e3a0303800300ff @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6008: 34ccd8e8c38587e7d46ec964d30d17863b166fda @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19701: b9e2377b1bd55114447c010cfd7f8b4302744afa @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

b9e2377b1bd5 drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx]  ✗ Fi.CI.BAT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
  2021-02-18 19:22   ` Ville Syrjälä
@ 2021-02-18 19:51     ` Vudum, Lakshminarayana
  0 siblings, 0 replies; 43+ messages in thread
From: Vudum, Lakshminarayana @ 2021-02-18 19:51 UTC (permalink / raw)
  To: Ville Syrjälä, intel-gfx

Re-reported.

-----Original Message-----
From: Ville Syrjälä <ville.syrjala@linux.intel.com> 
Sent: Thursday, February 18, 2021 11:22 AM
To: intel-gfx@lists.freedesktop.org
Cc: Vudum, Lakshminarayana <lakshminarayana.vudum@intel.com>
Subject: Re: ✗ Fi.CI.BAT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)

On Thu, Feb 18, 2021 at 07:08:27PM -0000, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
> URL   : https://patchwork.freedesktop.org/series/86672/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_9786 -> Patchwork_19701 
> ====================================================
> 
> Summary
> -------
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with Patchwork_19701 absolutely need to be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_19701, please notify your bug team to allow them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   External URL: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html
> 
> Possible new issues
> -------------------
> 
>   Here are the unknown changes that may have been introduced in Patchwork_19701:
> 
> ### IGT changes ###
> 
> #### Possible regressions ####
> 
>   * igt@gem_exec_suspend@basic-s0:
>     - fi-cfl-8109u:       [PASS][1] -> [INCOMPLETE][2]
>    [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/fi-cfl-8109u/igt@gem_exec_suspend@basic-s0.html
>    [2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/fi-cfl-8109u/
> igt@gem_exec_suspend@basic-s0.html

Looks like the machine went AWOL during suspend. Seems unrelated to the patch at hand.

--
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
                   ` (7 preceding siblings ...)
  (?)
@ 2021-02-18 20:58 ` Patchwork
  -1 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-02-18 20:58 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx


== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev2)
URL   : https://patchwork.freedesktop.org/series/86672/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_9786_full -> Patchwork_19701_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_19701_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_19701_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19701_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
    - shard-glk:          [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-glk3/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-glk1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html

  
Known issues
------------

  Here are the changes found in Patchwork_19701_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@feature_discovery@psr2:
    - shard-iclb:         [PASS][3] -> [SKIP][4] ([i915#658])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb2/igt@feature_discovery@psr2.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb7/igt@feature_discovery@psr2.html

  * igt@gem_ctx_persistence@engines-hostile:
    - shard-snb:          NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#1099])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-snb7/igt@gem_ctx_persistence@engines-hostile.html

  * igt@gem_ctx_persistence@engines-mixed-process:
    - shard-hsw:          NOTRUN -> [SKIP][6] ([fdo#109271] / [i915#1099])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw5/igt@gem_ctx_persistence@engines-mixed-process.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-apl:          NOTRUN -> [FAIL][7] ([i915#2846])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl1/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-glk:          [PASS][8] -> [FAIL][9] ([i915#2842])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-glk5/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-glk8/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-apl:          [PASS][10] -> [FAIL][11] ([i915#2842])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl2/igt@gem_exec_fair@basic-none@vcs0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl2/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][12] ([i915#2842])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl4/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][13] ([i915#2842]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb2/igt@gem_exec_fair@basic-pace@vcs1.html

  * igt@gem_exec_reloc@basic-many-active@vcs0:
    - shard-kbl:          NOTRUN -> [FAIL][14] ([i915#2389]) +4 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl4/igt@gem_exec_reloc@basic-many-active@vcs0.html

  * igt@gem_exec_reloc@basic-many-active@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][15] ([i915#2389])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb2/igt@gem_exec_reloc@basic-many-active@vcs1.html

  * igt@gem_exec_suspend@basic-s3:
    - shard-kbl:          [PASS][16] -> [DMESG-WARN][17] ([i915#180]) +2 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl4/igt@gem_exec_suspend@basic-s3.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][18] ([fdo#109271] / [i915#2190])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl3/igt@gem_huc_copy@huc-copy.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-skl:          NOTRUN -> [WARN][19] ([i915#2658])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl7/igt@gem_pwrite@basic-exhaustion.html
    - shard-kbl:          NOTRUN -> [WARN][20] ([i915#2658]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl1/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_workarounds@suspend-resume:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][21] ([i915#180])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@gem_workarounds@suspend-resume.html

  * igt@i915_pm_rpm@gem-execbuf-stress-pc8:
    - shard-tglb:         NOTRUN -> [SKIP][22] ([fdo#109506] / [i915#2411])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@i915_pm_rpm@gem-execbuf-stress-pc8.html

  * igt@kms_big_fb@y-tiled-addfb-size-overflow:
    - shard-hsw:          NOTRUN -> [SKIP][23] ([fdo#109271]) +71 similar issues
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw2/igt@kms_big_fb@y-tiled-addfb-size-overflow.html

  * igt@kms_big_joiner@invalid-modeset:
    - shard-kbl:          NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#2705])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl7/igt@kms_big_joiner@invalid-modeset.html

  * igt@kms_ccs@pipe-a-ccs-on-another-bo:
    - shard-snb:          NOTRUN -> [SKIP][25] ([fdo#109271]) +63 similar issues
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-snb7/igt@kms_ccs@pipe-a-ccs-on-another-bo.html

  * igt@kms_ccs@pipe-c-ccs-on-another-bo:
    - shard-skl:          NOTRUN -> [SKIP][26] ([fdo#109271] / [fdo#111304]) +1 similar issue
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl8/igt@kms_ccs@pipe-c-ccs-on-another-bo.html

  * igt@kms_chamelium@hdmi-frame-dump:
    - shard-snb:          NOTRUN -> [SKIP][27] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-snb7/igt@kms_chamelium@hdmi-frame-dump.html

  * igt@kms_chamelium@vga-hpd-for-each-pipe:
    - shard-kbl:          NOTRUN -> [SKIP][28] ([fdo#109271] / [fdo#111827]) +15 similar issues
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl7/igt@kms_chamelium@vga-hpd-for-each-pipe.html

  * igt@kms_color_chamelium@pipe-b-ctm-max:
    - shard-skl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [fdo#111827]) +10 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl4/igt@kms_color_chamelium@pipe-b-ctm-max.html

  * igt@kms_color_chamelium@pipe-c-ctm-max:
    - shard-apl:          NOTRUN -> [SKIP][30] ([fdo#109271] / [fdo#111827]) +16 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl1/igt@kms_color_chamelium@pipe-c-ctm-max.html
    - shard-hsw:          NOTRUN -> [SKIP][31] ([fdo#109271] / [fdo#111827]) +5 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw2/igt@kms_color_chamelium@pipe-c-ctm-max.html

  * igt@kms_color_chamelium@pipe-d-ctm-0-5:
    - shard-tglb:         NOTRUN -> [SKIP][32] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@kms_color_chamelium@pipe-d-ctm-0-5.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-apl:          NOTRUN -> [TIMEOUT][33] ([i915#1319])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl8/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_cursor_crc@pipe-a-cursor-128x128-onscreen:
    - shard-skl:          NOTRUN -> [FAIL][34] ([i915#54]) +1 similar issue
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl1/igt@kms_cursor_crc@pipe-a-cursor-128x128-onscreen.html

  * igt@kms_cursor_crc@pipe-a-cursor-64x64-sliding:
    - shard-skl:          [PASS][35] -> [FAIL][36] ([i915#54]) +6 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl3/igt@kms_cursor_crc@pipe-a-cursor-64x64-sliding.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl10/igt@kms_cursor_crc@pipe-a-cursor-64x64-sliding.html

  * igt@kms_cursor_edge_walk@pipe-c-256x256-bottom-edge:
    - shard-skl:          NOTRUN -> [SKIP][37] ([fdo#109271]) +135 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl4/igt@kms_cursor_edge_walk@pipe-c-256x256-bottom-edge.html

  * igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy:
    - shard-hsw:          [PASS][38] -> [FAIL][39] ([i915#96])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-hsw5/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw1/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html

  * igt@kms_flip@flip-vs-absolute-wf_vblank@a-edp1:
    - shard-skl:          [PASS][40] -> [DMESG-WARN][41] ([i915#1982])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl3/igt@kms_flip@flip-vs-absolute-wf_vblank@a-edp1.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl9/igt@kms_flip@flip-vs-absolute-wf_vblank@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-skl:          [PASS][42] -> [FAIL][43] ([i915#79])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl1/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl2/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile:
    - shard-apl:          NOTRUN -> [SKIP][44] ([fdo#109271] / [i915#2642])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl3/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs:
    - shard-kbl:          NOTRUN -> [SKIP][45] ([fdo#109271] / [i915#2672])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl4/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-cpu:
    - shard-tglb:         NOTRUN -> [SKIP][46] ([fdo#111825])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([i915#1839])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
    - shard-apl:          NOTRUN -> [FAIL][48] ([fdo#108145] / [i915#265]) +2 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl8/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-basic:
    - shard-kbl:          NOTRUN -> [FAIL][49] ([fdo#108145] / [i915#265]) +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl4/igt@kms_plane_alpha_blend@pipe-b-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][50] ([i915#265])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl1/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
    - shard-skl:          [PASS][51] -> [FAIL][52] ([fdo#108145] / [i915#265]) +1 similar issue
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl6/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html

  * igt@kms_plane_cursor@pipe-d-overlay-size-128:
    - shard-hsw:          NOTRUN -> [SKIP][53] ([fdo#109271] / [i915#533]) +10 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw2/igt@kms_plane_cursor@pipe-d-overlay-size-128.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2:
    - shard-apl:          NOTRUN -> [SKIP][54] ([fdo#109271] / [i915#658]) +5 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl8/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-2.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5:
    - shard-skl:          NOTRUN -> [SKIP][55] ([fdo#109271] / [i915#658]) +1 similar issue
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl1/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3:
    - shard-kbl:          NOTRUN -> [SKIP][56] ([fdo#109271] / [i915#658]) +3 similar issues
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html

  * igt@kms_psr@psr2_primary_blt:
    - shard-hsw:          NOTRUN -> [SKIP][57] ([fdo#109271] / [i915#1072]) +2 similar issues
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw2/igt@kms_psr@psr2_primary_blt.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         [PASS][58] -> [SKIP][59] ([fdo#109441]) +1 similar issue
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb7/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_sysfs_edid_timing:
    - shard-skl:          NOTRUN -> [FAIL][60] ([IGT#2])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl4/igt@kms_sysfs_edid_timing.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
    - shard-skl:          [PASS][61] -> [INCOMPLETE][62] ([i915#198] / [i915#2828])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl4/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl9/igt@kms_vblank@pipe-c-ts-continuation-suspend.html

  * igt@kms_vblank@pipe-d-ts-continuation-idle:
    - shard-apl:          NOTRUN -> [SKIP][63] ([fdo#109271]) +148 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl8/igt@kms_vblank@pipe-d-ts-continuation-idle.html

  * igt@kms_vblank@pipe-d-wait-idle:
    - shard-apl:          NOTRUN -> [SKIP][64] ([fdo#109271] / [i915#533]) +3 similar issues
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl8/igt@kms_vblank@pipe-d-wait-idle.html

  * igt@perf@blocking:
    - shard-skl:          [PASS][65] -> [FAIL][66] ([i915#1542])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl8/igt@perf@blocking.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl3/igt@perf@blocking.html

  * igt@prime_nv_pcopy@test1_micro:
    - shard-tglb:         NOTRUN -> [SKIP][67] ([fdo#109291])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@prime_nv_pcopy@test1_micro.html

  * igt@prime_nv_pcopy@test2:
    - shard-kbl:          NOTRUN -> [SKIP][68] ([fdo#109271]) +179 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl4/igt@prime_nv_pcopy@test2.html

  * igt@sysfs_clients@recycle-many:
    - shard-hsw:          [PASS][69] -> [FAIL][70] ([i915#3028])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-hsw1/igt@sysfs_clients@recycle-many.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw5/igt@sysfs_clients@recycle-many.html
    - shard-tglb:         [PASS][71] -> [FAIL][72] ([i915#3028])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb6/igt@sysfs_clients@recycle-many.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@sysfs_clients@recycle-many.html

  * igt@sysfs_clients@sema-10@vcs0:
    - shard-apl:          [PASS][73] -> [SKIP][74] ([fdo#109271] / [i915#3026]) +1 similar issue
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl6/igt@sysfs_clients@sema-10@vcs0.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl6/igt@sysfs_clients@sema-10@vcs0.html

  * igt@sysfs_clients@split-25@vecs0:
    - shard-skl:          [PASS][75] -> [SKIP][76] ([fdo#109271])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl7/igt@sysfs_clients@split-25@vecs0.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl10/igt@sysfs_clients@split-25@vecs0.html

  
#### Possible fixes ####

  * igt@gem_exec_reloc@basic-many-active@rcs0:
    - shard-hsw:          [FAIL][77] ([i915#2389]) -> [PASS][78]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-hsw4/igt@gem_exec_reloc@basic-many-active@rcs0.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw1/igt@gem_exec_reloc@basic-many-active@rcs0.html

  * igt@gem_exec_schedule@u-fairslice@rcs0:
    - shard-tglb:         [DMESG-WARN][79] ([i915#2803]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb8/igt@gem_exec_schedule@u-fairslice@rcs0.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb1/igt@gem_exec_schedule@u-fairslice@rcs0.html

  * igt@gem_exec_whisper@basic-fds:
    - shard-glk:          [DMESG-WARN][81] ([i915#118] / [i915#95]) -> [PASS][82] +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-glk6/igt@gem_exec_whisper@basic-fds.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-glk1/igt@gem_exec_whisper@basic-fds.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-skl:          [DMESG-WARN][83] ([i915#1436] / [i915#716]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl4/igt@gen9_exec_parse@allowed-single.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl7/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-hsw:          [INCOMPLETE][85] ([i915#2880]) -> [PASS][86]
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-hsw4/igt@i915_module_load@reload-with-fault-injection.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw5/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_selftest@live@hangcheck:
    - shard-hsw:          [INCOMPLETE][87] ([i915#2782]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-hsw2/igt@i915_selftest@live@hangcheck.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw2/igt@i915_selftest@live@hangcheck.html

  * igt@i915_suspend@forcewake:
    - shard-skl:          [INCOMPLETE][89] ([i915#636]) -> [PASS][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl6/igt@i915_suspend@forcewake.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl4/igt@i915_suspend@forcewake.html

  * igt@kms_color@pipe-a-ctm-0-25:
    - shard-skl:          [DMESG-WARN][91] ([i915#1982]) -> [PASS][92]
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl3/igt@kms_color@pipe-a-ctm-0-25.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl9/igt@kms_color@pipe-a-ctm-0-25.html

  * igt@kms_cursor_crc@pipe-c-cursor-64x21-random:
    - shard-skl:          [FAIL][93] ([i915#54]) -> [PASS][94] +7 similar issues
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl1/igt@kms_cursor_crc@pipe-c-cursor-64x21-random.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl9/igt@kms_cursor_crc@pipe-c-cursor-64x21-random.html

  * igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-atomic:
    - shard-skl:          [FAIL][95] ([i915#2346]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl5/igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-atomic.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl7/igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-atomic.html

  * igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1:
    - shard-glk:          [FAIL][97] ([i915#79]) -> [PASS][98] +1 similar issue
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-glk1/igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-glk9/igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-edp1:
    - shard-skl:          [INCOMPLETE][99] ([i915#198] / [i915#2295]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl6/igt@kms_flip@flip-vs-suspend-interruptible@c-edp1.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl8/igt@kms_flip@flip-vs-suspend-interruptible@c-edp1.html

  * igt@kms_hdr@bpc-switch:
    - shard-skl:          [FAIL][101] ([i915#1188]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl4/igt@kms_hdr@bpc-switch.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl7/igt@kms_hdr@bpc-switch.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [INCOMPLETE][103] ([i915#146] / [i915#198] / [i915#2828]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl10/igt@kms_hdr@bpc-switch-suspend.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl1/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - shard-kbl:          [DMESG-WARN][105] ([i915#180]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl6/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [FAIL][107] ([fdo#108145] / [i915#265]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl1/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl3/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_psr@psr2_sprite_blt:
    - shard-iclb:         [SKIP][109] ([fdo#109441]) -> [PASS][110] +1 similar issue
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb3/igt@kms_psr@psr2_sprite_blt.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
    - shard-apl:          [DMESG-WARN][111] ([i915#180]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl8/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl1/igt@kms_vblank@pipe-c-ts-continuation-suspend.html

  * igt@sysfs_clients@recycle:
    - shard-hsw:          [FAIL][113] ([i915#3028]) -> [PASS][114]
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-hsw7/igt@sysfs_clients@recycle.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-hsw7/igt@sysfs_clients@recycle.html
    - shard-iclb:         [FAIL][115] ([i915#3028]) -> [PASS][116]
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb4/igt@sysfs_clients@recycle.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb4/igt@sysfs_clients@recycle.html

  * igt@sysfs_clients@sema-10@rcs0:
    - shard-apl:          [SKIP][117] ([fdo#109271] / [i915#3026]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl6/igt@sysfs_clients@sema-10@rcs0.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl6/igt@sysfs_clients@sema-10@rcs0.html

  * igt@sysfs_clients@split-25@rcs0:
    - shard-skl:          [SKIP][119] ([fdo#109271]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-skl7/igt@sysfs_clients@split-25@rcs0.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-skl10/igt@sysfs_clients@split-25@rcs0.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc3co-vpb-simulation:
    - shard-iclb:         [SKIP][121] ([i915#588]) -> [SKIP][122] ([i915#658])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb2/igt@i915_pm_dc@dc3co-vpb-simulation.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb8/igt@i915_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-tglb:         [INCOMPLETE][123] ([i915#1602] / [i915#2411] / [i915#456]) -> [INCOMPLETE][124] ([i915#1436] / [i915#1602] / [i915#1887] / [i915#2411] / [i915#456])
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb7/igt@kms_fbcon_fbt@fbc-suspend.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb3/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
    - shard-iclb:         [SKIP][125] ([i915#658]) -> [SKIP][126] ([i915#2920]) +1 similar issue
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb3/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-4:
    - shard-iclb:         [SKIP][127] ([i915#2920]) -> [SKIP][128] ([i915#658]) +2 similar issues
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-4.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-iclb8/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-4.html

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][129], [FAIL][130], [FAIL][131], [FAIL][132], [FAIL][133], [FAIL][134]) ([i915#1814] / [i915#2295] / [i915#2505] / [i915#3002]) -> ([FAIL][135], [FAIL][136], [FAIL][137], [FAIL][138], [FAIL][139], [FAIL][140], [FAIL][141]) ([i915#1814] / [i915#2295] / [i915#2505] / [i915#602])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl4/igt@runner@aborted.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl6/igt@runner@aborted.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl7/igt@runner@aborted.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl6/igt@runner@aborted.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl4/igt@runner@aborted.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-kbl6/igt@runner@aborted.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@runner@aborted.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@runner@aborted.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl7/igt@runner@aborted.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl6/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-kbl7/igt@runner@aborted.html
    - shard-apl:          ([FAIL][142], [FAIL][143], [FAIL][144]) ([i915#1814] / [i915#2295] / [i915#3002]) -> ([FAIL][145], [FAIL][146]) ([i915#2295] / [i915#3002])
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl8/igt@runner@aborted.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl3/igt@runner@aborted.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-apl7/igt@runner@aborted.html
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl7/igt@runner@aborted.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-apl8/igt@runner@aborted.html
    - shard-tglb:         ([FAIL][147], [FAIL][148], [FAIL][149], [FAIL][150], [FAIL][151]) ([i915#1602] / [i915#2295] / [i915#2426] / [i915#2667] / [i915#2803] / [i915#3002]) -> ([FAIL][152], [FAIL][153], [FAIL][154], [FAIL][155]) ([i915#1602] / [i915#2295] / [i915#2667] / [i915#3002])
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb5/igt@runner@aborted.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb6/igt@runner@aborted.html
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb8/igt@runner@aborted.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb7/igt@runner@aborted.html
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9786/shard-tglb8/igt@runner@aborted.html
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb7/igt@runner@aborted.html
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/shard-tglb3/igt@runner@aborted.html
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_1970

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19701/index.html

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()
  2021-02-18 16:03   ` [Intel-gfx] " Ville Syrjala
@ 2021-02-19 15:08     ` Daniel Vetter
  -1 siblings, 0 replies; 43+ messages in thread
From: Daniel Vetter @ 2021-02-19 15:08 UTC (permalink / raw)
  To: Ville Syrjala
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Thu, Feb 18, 2021 at 06:03:05PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> drm_vblank_restore() exists because certain power saving states
> can clobber the hardware frame counter. The way it does this is
> by guesstimating how many frames were missed purely based on
> the difference between the last stored timestamp vs. a newly
> sampled timestamp.
> 
> If we should call this function before a full frame has
> elapsed since we sampled the last timestamp we would end up
> with a possibly slightly different timestamp value for the
> same frame. Currently we will happily overwrite the already
> stored timestamp for the frame with the new value. This
> could cause userspace to observe two different timestamps
> for the same frame (and the timestamp could even go
> backwards depending on how much error we introduce when
> correcting the timestamp based on the scanout position).
> 
> To avoid that let's not update the stored timestamp at all,
> and instead we just fix up the last recorded hw vblank counter
> value such that the already stored timestamp/seq number will
> match. Thus the next time a vblank irq happens it will calculate
> the correct diff between the current and stored hw vblank counter
> values.
> 
> Sidenote: Another possible idea that came to mind would be to
> do this correction only if the power really was removed since
> the last time we sampled the hw frame counter. But to do that
> we would need a robust way to detect when it has occurred. Some
> possibilities could involve some kind of hardware power well
> transition counter, or potentially we could store a magic value
> in a scratch register that lives in the same power well. But
> I'm not sure either of those exist, so would need an actual
> investigation to find out. All of that is very hardware specific
> of course, so would have to be done in the driver code.
> 
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

For testing, is there anything other than hsw psr that needs this, or is
that just the box you have locally?
-Daniel

> ---
>  drivers/gpu/drm/drm_vblank.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 2bd989688eae..3417e1ac7918 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -1478,6 +1478,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
>  	u64 diff_ns;
>  	u32 cur_vblank, diff = 1;
>  	int count = DRM_TIMESTAMP_MAXRETRIES;
> +	u32 max_vblank_count = drm_max_vblank_count(dev, pipe);
>  
>  	if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
>  		return;
> @@ -1504,7 +1505,7 @@ static void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
>  	drm_dbg_vbl(dev,
>  		    "missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
>  		    diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
> -	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
> +	vblank->last = (cur_vblank - diff) & max_vblank_count;
>  }
>  
>  /**
> -- 
> 2.26.2
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
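
As a worked illustration of the fix-up, with made-up numbers: assuming a
24-bit hw frame counter (max_vblank_count == 0xffffff), a cur_vblank
sampled as 5 and a timestamp delta corresponding to 3 missed frames give

	vblank->last = (5 - 3) & 0xffffff	/* == 2 */

and if the subtraction underflows (say cur_vblank == 1) the mask simply
wraps it to 0xfffffe, so the next vblank irq still computes the correct
diff against the already stored timestamp/seq number instead of this
function overwriting that timestamp.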

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()
  2021-02-19 15:08     ` [Intel-gfx] " Daniel Vetter
@ 2021-02-19 15:47       ` Ville Syrjälä
  -1 siblings, 0 replies; 43+ messages in thread
From: Ville Syrjälä @ 2021-02-19 15:47 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Daniel Vetter, intel-gfx, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

On Fri, Feb 19, 2021 at 04:08:09PM +0100, Daniel Vetter wrote:
> On Thu, Feb 18, 2021 at 06:03:05PM +0200, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > drm_vblank_restore() exists because certain power saving states
> > can clobber the hardware frame counter. The way it does this is
> > by guesstimating how many frames were missed purely based on
> > the difference between the last stored timestamp vs. a newly
> > sampled timestamp.
> > 
> > If we should call this function before a full frame has
> > elapsed since we sampled the last timestamp we would end up
> > with a possibly slightly different timestamp value for the
> > same frame. Currently we will happily overwrite the already
> > stored timestamp for the frame with the new value. This
> > could cause userspace to observe two different timestamps
> > for the same frame (and the timestamp could even go
> > backwards depending on how much error we introduce when
> > correcting the timestamp based on the scanout position).
> > 
> > To avoid that let's not update the stored timestamp at all,
> > and instead we just fix up the last recorded hw vblank counter
> > value such that the already stored timestamp/seq number will
> > match. Thus the next time a vblank irq happens it will calculate
> > the correct diff between the current and stored hw vblank counter
> > values.
> > 
> > Sidenote: Another possible idea that came to mind would be to
> > do this correction only if the power really was removed since
> > the last time we sampled the hw frame counter. But to do that
> > we would need a robust way to detect when it has occurred. Some
> > possibilities could involve some kind of hardare power well
> > transition counter, or potentially we could store a magic value
> > in a scratch register that lives in the same power well. But
> > I'm not sure either of those exist, so would need an actual
> > investigation to find out. All of that is very hardware specific
> > of course, so would have to be done in the driver code.
> > 
> > Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> 
> For testing, there's nothing else than hsw psr that needs this, or that's
> just the box you have locally?

Just the one I happen to have.

Any machine with PSR should be able to hit this. Though now that I
refresh my memory, I guess HSW/BDW don't actually fully reset the
hw frame counter since they don't have the DC5/6 stuff. But even
on HSW/BDW the frame counter would certainly stop while in PSR,
so maintaining sensible vblank seq numbers will still require
drm_vblank_restore(). It's just that my further idea of checking
some power well counter/scratch register would not help in cases
where DC states are not used; instead we'd need some kind of PSR
residency counter/etc.
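
Purely as a sketch of that idea (hypothetical and untested; the exported
drm_crtc_vblank_restore() and struct drm_crtc are the only real DRM
interfaces here, all the example_* names are made up):

#include <drm/drm_vblank.h>

struct example_crtc {
	struct drm_crtc base;
	u32 last_psr_residency;
};

/* Hypothetical hw read; would be some PSR residency counter. */
u32 example_read_psr_residency(struct example_crtc *crtc);

/*
 * Only go through the vblank restore dance when the residency
 * counter shows the panel actually self-refreshed (i.e. the hw
 * frame counter may have stopped) since the last sample.
 */
static void example_vblank_enable(struct example_crtc *crtc)
{
	u32 residency = example_read_psr_residency(crtc);

	if (residency != crtc->last_psr_residency) {
		crtc->last_psr_residency = residency;
		drm_crtc_vblank_restore(&crtc->base);
	}
}

Whether such a residency counter actually exists is exactly the open
question above, so this stays a driver-side thought experiment.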

-- 
Ville Syrjälä
Intel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/vblank: Avoid storing a timestamp for the same frame twice (rev3)
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
                   ` (8 preceding siblings ...)
  (?)
@ 2021-02-21  4:18 ` Patchwork
  -1 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-02-21  4:18 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx


== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev3)
URL   : https://patchwork.freedesktop.org/series/86672/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9791 -> Patchwork_19711
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/index.html

Known issues
------------

  Here are the changes found in Patchwork_19711 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@semaphore:
    - fi-bsw-nick:        NOTRUN -> [SKIP][1] ([fdo#109271]) +17 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/fi-bsw-nick/igt@amdgpu/amd_basic@semaphore.html

  * igt@amdgpu/amd_basic@userptr:
    - fi-byt-j1900:       NOTRUN -> [SKIP][2] ([fdo#109271]) +17 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/fi-byt-j1900/igt@amdgpu/amd_basic@userptr.html

  
#### Possible fixes ####

  * igt@i915_pm_rpm@module-reload:
    - fi-byt-j1900:       [INCOMPLETE][3] ([i915#142] / [i915#2405]) -> [PASS][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html

  * igt@i915_selftest@live@execlists:
    - fi-bsw-nick:        [INCOMPLETE][5] ([i915#2940]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/fi-bsw-nick/igt@i915_selftest@live@execlists.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/fi-bsw-nick/igt@i915_selftest@live@execlists.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1222]: https://gitlab.freedesktop.org/drm/intel/issues/1222
  [i915#142]: https://gitlab.freedesktop.org/drm/intel/issues/142
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2405]: https://gitlab.freedesktop.org/drm/intel/issues/2405
  [i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533


Participating hosts (45 -> 40)
------------------------------

  Additional (1): fi-ehl-2 
  Missing    (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-tgl-y fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9791 -> Patchwork_19711

  CI-20190529: 20190529
  CI_DRM_9791: c1991e1c98008d13d9773744a9f9da0884644917 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6009: a4dccf189b34a55338feec9927dac57c467c4100 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19711: 201c60e18cf69bd374d5c01f894d129ac7f1d170 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

201c60e18cf6 drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore()

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/index.html

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/vblank: Avoid storing a timestamp for the same frame twice (rev3)
  2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
                   ` (9 preceding siblings ...)
  (?)
@ 2021-02-21  5:41 ` Patchwork
  -1 siblings, 0 replies; 43+ messages in thread
From: Patchwork @ 2021-02-21  5:41 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx


== Series Details ==

Series: drm/vblank: Avoid storing a timestamp for the same frame twice (rev3)
URL   : https://patchwork.freedesktop.org/series/86672/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9791_full -> Patchwork_19711_full
====================================================

Summary
-------

  **WARNING**

  Minor unknown changes coming with Patchwork_19711_full need to be verified
  manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_19711_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19711_full:

### IGT changes ###

#### Warnings ####

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][1], [FAIL][2], [FAIL][3]) ([i915#1814] / [i915#2505] / [i915#92]) -> ([FAIL][4], [FAIL][5], [FAIL][6]) ([i915#1814] / [i915#2505] / [i915#602])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl1/igt@runner@aborted.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl1/igt@runner@aborted.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl6/igt@runner@aborted.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl1/igt@runner@aborted.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl7/igt@runner@aborted.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl6/igt@runner@aborted.html
    - shard-apl:          ([FAIL][7], [FAIL][8]) -> ([FAIL][9], [FAIL][10], [FAIL][11]) ([i915#3002])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-apl8/igt@runner@aborted.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-apl8/igt@runner@aborted.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl1/igt@runner@aborted.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl2/igt@runner@aborted.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl3/igt@runner@aborted.html
    - shard-skl:          ([FAIL][12], [FAIL][13]) ([i915#3002]) -> ([FAIL][14], [FAIL][15], [FAIL][16], [FAIL][17]) ([i915#1436] / [i915#2426] / [i915#3002])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl7/igt@runner@aborted.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl8/igt@runner@aborted.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl5/igt@runner@aborted.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl10/igt@runner@aborted.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl6/igt@runner@aborted.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl4/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_19711_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@feature_discovery@psr2:
    - shard-iclb:         [PASS][18] -> [SKIP][19] ([i915#658])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb2/igt@feature_discovery@psr2.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb6/igt@feature_discovery@psr2.html

  * igt@gem_create@create-massive:
    - shard-apl:          NOTRUN -> [DMESG-WARN][20] ([i915#3002])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl2/igt@gem_create@create-massive.html

  * igt@gem_ctx_persistence@close-replace-race:
    - shard-glk:          NOTRUN -> [TIMEOUT][21] ([i915#2918])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@gem_ctx_persistence@close-replace-race.html

  * igt@gem_ctx_persistence@smoketest:
    - shard-snb:          NOTRUN -> [SKIP][22] ([fdo#109271] / [i915#1099]) +4 similar issues
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@gem_ctx_persistence@smoketest.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-glk:          [PASS][23] -> [FAIL][24] ([i915#2842])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-glk5/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk9/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-tglb:         [PASS][25] -> [FAIL][26] ([i915#2842])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-tglb2/igt@gem_exec_fair@basic-none-share@rcs0.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-tglb5/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
    - shard-kbl:          [PASS][27] -> [SKIP][28] ([fdo#109271])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl6/igt@gem_exec_fair@basic-pace@vcs1.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl6/igt@gem_exec_fair@basic-pace@vcs1.html

  * igt@gem_exec_reloc@basic-many-active@rcs0:
    - shard-snb:          NOTRUN -> [FAIL][29] ([i915#2389]) +2 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@gem_exec_reloc@basic-many-active@rcs0.html

  * igt@gem_exec_reloc@basic-parallel:
    - shard-kbl:          NOTRUN -> [TIMEOUT][30] ([i915#1729])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@gem_exec_reloc@basic-parallel.html

  * igt@gem_exec_schedule@u-fairslice@rcs0:
    - shard-skl:          NOTRUN -> [DMESG-WARN][31] ([i915#1610] / [i915#2803])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl6/igt@gem_exec_schedule@u-fairslice@rcs0.html

  * igt@gem_exec_whisper@basic-contexts-priority:
    - shard-iclb:         [PASS][32] -> [INCOMPLETE][33] ([i915#2461])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb7/igt@gem_exec_whisper@basic-contexts-priority.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb2/igt@gem_exec_whisper@basic-contexts-priority.html

  * igt@gem_exec_whisper@basic-queues-forked-all:
    - shard-glk:          [PASS][34] -> [DMESG-WARN][35] ([i915#118] / [i915#95])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-glk6/igt@gem_exec_whisper@basic-queues-forked-all.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk3/igt@gem_exec_whisper@basic-queues-forked-all.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-snb:          NOTRUN -> [WARN][36] ([i915#2658])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_userptr_blits@input-checking:
    - shard-snb:          NOTRUN -> [DMESG-WARN][37] ([i915#3002])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@process-exit-mmap@wb:
    - shard-glk:          NOTRUN -> [SKIP][38] ([fdo#109271] / [i915#1699]) +3 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@gem_userptr_blits@process-exit-mmap@wb.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-snb:          NOTRUN -> [FAIL][39] ([i915#2724])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@gem_userptr_blits@vma-merge.html
    - shard-apl:          NOTRUN -> [INCOMPLETE][40] ([i915#2502] / [i915#2667])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl3/igt@gem_userptr_blits@vma-merge.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-skl:          [PASS][41] -> [DMESG-WARN][42] ([i915#1436] / [i915#716])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl4/igt@gen9_exec_parse@allowed-single.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl5/igt@gen9_exec_parse@allowed-single.html

  * igt@gen9_exec_parse@batch-invalid-length:
    - shard-snb:          NOTRUN -> [SKIP][43] ([fdo#109271]) +214 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@gen9_exec_parse@batch-invalid-length.html

  * igt@i915_hangman@engine-error@vecs0:
    - shard-kbl:          NOTRUN -> [SKIP][44] ([fdo#109271]) +158 similar issues
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@i915_hangman@engine-error@vecs0.html

  * igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp:
    - shard-kbl:          NOTRUN -> [SKIP][45] ([fdo#109271] / [i915#1937])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl4/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp.html

  * igt@i915_suspend@debugfs-reader:
    - shard-apl:          [PASS][46] -> [DMESG-WARN][47] ([i915#180])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-apl2/igt@i915_suspend@debugfs-reader.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl1/igt@i915_suspend@debugfs-reader.html

  * igt@kms_chamelium@dp-edid-change-during-suspend:
    - shard-apl:          NOTRUN -> [SKIP][48] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl8/igt@kms_chamelium@dp-edid-change-during-suspend.html

  * igt@kms_chamelium@vga-hpd-for-each-pipe:
    - shard-kbl:          NOTRUN -> [SKIP][49] ([fdo#109271] / [fdo#111827]) +16 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl4/igt@kms_chamelium@vga-hpd-for-each-pipe.html

  * igt@kms_color@pipe-a-ctm-0-75:
    - shard-skl:          [PASS][50] -> [DMESG-WARN][51] ([i915#1982])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl10/igt@kms_color@pipe-a-ctm-0-75.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl4/igt@kms_color@pipe-a-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-5:
    - shard-glk:          NOTRUN -> [SKIP][52] ([fdo#109271] / [fdo#111827]) +4 similar issues
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk2/igt@kms_color_chamelium@pipe-a-ctm-0-5.html

  * igt@kms_color_chamelium@pipe-b-ctm-negative:
    - shard-skl:          NOTRUN -> [SKIP][53] ([fdo#109271] / [fdo#111827]) +4 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl8/igt@kms_color_chamelium@pipe-b-ctm-negative.html

  * igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes:
    - shard-snb:          NOTRUN -> [SKIP][54] ([fdo#109271] / [fdo#111827]) +11 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-snb6/igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes.html

  * igt@kms_content_protection@atomic:
    - shard-apl:          NOTRUN -> [TIMEOUT][55] ([i915#1319]) +2 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl1/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-kbl:          NOTRUN -> [TIMEOUT][56] ([i915#1319])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@uevent:
    - shard-apl:          NOTRUN -> [FAIL][57] ([i915#2105])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl7/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-b-cursor-128x42-sliding:
    - shard-skl:          NOTRUN -> [FAIL][58] ([i915#54])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl8/igt@kms_cursor_crc@pipe-b-cursor-128x42-sliding.html

  * igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding:
    - shard-skl:          [PASS][59] -> [FAIL][60] ([i915#54]) +10 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl10/igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl2/igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-skl:          [PASS][61] -> [FAIL][62] ([i915#2346] / [i915#533])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl7/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl6/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-untiled:
    - shard-skl:          NOTRUN -> [DMESG-WARN][63] ([i915#1982])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl3/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-untiled.html

  * igt@kms_flip@flip-vs-suspend@c-dp1:
    - shard-kbl:          [PASS][64] -> [DMESG-WARN][65] ([i915#180]) +2 similar issues
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl2/igt@kms_flip@flip-vs-suspend@c-dp1.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl7/igt@kms_flip@flip-vs-suspend@c-dp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
    - shard-apl:          NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#2672])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile:
    - shard-kbl:          NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#2642])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-shrfb-draw-render:
    - shard-skl:          NOTRUN -> [SKIP][68] ([fdo#109271]) +43 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl8/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-pwrite:
    - shard-glk:          NOTRUN -> [SKIP][69] ([fdo#109271]) +43 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-pwrite.html

  * igt@kms_hdr@bpc-switch-dpms:
    - shard-skl:          [PASS][70] -> [FAIL][71] ([i915#1188])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl10/igt@kms_hdr@bpc-switch-dpms.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl2/igt@kms_hdr@bpc-switch-dpms.html

  * igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c:
    - shard-apl:          NOTRUN -> [SKIP][72] ([fdo#109271]) +87 similar issues
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl7/igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence:
    - shard-glk:          NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#533])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb:
    - shard-skl:          NOTRUN -> [FAIL][74] ([fdo#108145] / [i915#265])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl8/igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min:
    - shard-skl:          [PASS][75] -> [FAIL][76] ([fdo#108145] / [i915#265])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl8/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl10/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-7efc:
    - shard-kbl:          NOTRUN -> [FAIL][77] ([fdo#108145] / [i915#265]) +2 similar issues
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@kms_plane_alpha_blend@pipe-c-alpha-7efc.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb:
    - shard-glk:          NOTRUN -> [FAIL][78] ([fdo#108145] / [i915#265])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
    - shard-kbl:          NOTRUN -> [FAIL][79] ([i915#265])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl4/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
    - shard-skl:          NOTRUN -> [SKIP][80] ([fdo#109271] / [i915#2733])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl8/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1:
    - shard-apl:          NOTRUN -> [SKIP][81] ([fdo#109271] / [i915#658]) +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3:
    - shard-kbl:          NOTRUN -> [SKIP][82] ([fdo#109271] / [i915#658]) +2 similar issues
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
    - shard-skl:          NOTRUN -> [SKIP][83] ([fdo#109271] / [i915#658]) +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl6/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-3:
    - shard-glk:          NOTRUN -> [SKIP][84] ([fdo#109271] / [i915#658])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@kms_psr2_sf@plane-move-sf-dmg-area-3.html

  * igt@kms_psr@psr2_cursor_plane_onoff:
    - shard-iclb:         [PASS][85] -> [SKIP][86] ([fdo#109441]) +2 similar issues
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb2/igt@kms_psr@psr2_cursor_plane_onoff.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb5/igt@kms_psr@psr2_cursor_plane_onoff.html

  * igt@kms_vblank@pipe-b-ts-continuation-suspend:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][87] ([i915#180])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl1/igt@kms_vblank@pipe-b-ts-continuation-suspend.html

  * igt@kms_vblank@pipe-d-wait-idle:
    - shard-kbl:          NOTRUN -> [SKIP][88] ([fdo#109271] / [i915#533]) +1 similar issue
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@kms_vblank@pipe-d-wait-idle.html

  * igt@perf@polling-parameterized:
    - shard-tglb:         [PASS][89] -> [FAIL][90] ([i915#1542])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-tglb2/igt@perf@polling-parameterized.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-tglb5/igt@perf@polling-parameterized.html

  * igt@sysfs_clients@recycle:
    - shard-iclb:         [PASS][91] -> [FAIL][92] ([i915#3028])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb6/igt@sysfs_clients@recycle.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb7/igt@sysfs_clients@recycle.html

  * igt@sysfs_clients@recycle-many:
    - shard-hsw:          [PASS][93] -> [FAIL][94] ([i915#3028])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-hsw4/igt@sysfs_clients@recycle-many.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-hsw8/igt@sysfs_clients@recycle-many.html

  * igt@sysfs_clients@split-25@vecs0:
    - shard-skl:          [PASS][95] -> [SKIP][96] ([fdo#109271])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl5/igt@sysfs_clients@split-25@vecs0.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl3/igt@sysfs_clients@split-25@vecs0.html

  
#### Possible fixes ####

  * igt@gem_eio@in-flight-contexts-10ms:
    - shard-tglb:         [TIMEOUT][97] ([i915#3063]) -> [PASS][98]
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-tglb6/igt@gem_eio@in-flight-contexts-10ms.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-tglb1/igt@gem_eio@in-flight-contexts-10ms.html

  * igt@gem_eio@in-flight-contexts-immediate:
    - shard-iclb:         [TIMEOUT][99] ([i915#3070]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb2/igt@gem_eio@in-flight-contexts-immediate.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb5/igt@gem_eio@in-flight-contexts-immediate.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglb:         [FAIL][101] ([i915#2842]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-tglb1/igt@gem_exec_fair@basic-flow@rcs0.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-tglb7/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-iclb:         [FAIL][103] ([i915#2842]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb2/igt@gem_exec_fair@basic-none-share@rcs0.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb7/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_schedule@u-fairslice@vecs0:
    - shard-glk:          [DMESG-WARN][105] ([i915#1610] / [i915#2803]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-glk7/igt@gem_exec_schedule@u-fairslice@vecs0.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk2/igt@gem_exec_schedule@u-fairslice@vecs0.html

  * igt@gem_exec_whisper@basic-forked:
    - shard-glk:          [DMESG-WARN][107] ([i915#118] / [i915#95]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-glk2/igt@gem_exec_whisper@basic-forked.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-glk4/igt@gem_exec_whisper@basic-forked.html

  * igt@i915_selftest@live@client:
    - shard-apl:          [DMESG-FAIL][109] ([i915#3047]) -> [PASS][110]
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-apl1/igt@i915_selftest@live@client.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl1/igt@i915_selftest@live@client.html

  * igt@i915_suspend@fence-restore-tiled2untiled:
    - shard-skl:          [INCOMPLETE][111] ([i915#198]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl7/igt@i915_suspend@fence-restore-tiled2untiled.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl6/igt@i915_suspend@fence-restore-tiled2untiled.html

  * igt@kms_color@pipe-a-ctm-max:
    - shard-skl:          [DMESG-WARN][113] ([i915#1982]) -> [PASS][114] +1 similar issue
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl8/igt@kms_color@pipe-a-ctm-max.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl10/igt@kms_color@pipe-a-ctm-max.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen:
    - shard-skl:          [FAIL][115] ([i915#54]) -> [PASS][116] +2 similar issues
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl6/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl7/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-kbl:          [DMESG-WARN][117] ([i915#180]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl1/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl2/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-b-256x256-top-edge:
    - shard-kbl:          [FAIL][119] ([i915#70]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl1/igt@kms_cursor_edge_walk@pipe-b-256x256-top-edge.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl1/igt@kms_cursor_edge_walk@pipe-b-256x256-top-edge.html

  * igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-legacy:
    - shard-skl:          [FAIL][121] ([i915#2346]) -> [PASS][122]
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl2/igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-legacy.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl2/igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-legacy.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-kbl:          [INCOMPLETE][123] ([i915#155] / [i915#180] / [i915#636]) -> [PASS][124]
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-kbl1/igt@kms_fbcon_fbt@fbc-suspend.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-kbl4/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-skl:          [FAIL][125] ([i915#79]) -> [PASS][126]
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl2/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl1/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@b-edp1:
    - shard-skl:          [FAIL][127] ([i915#2122]) -> [PASS][128]
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl2/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl1/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html

  * igt@kms_flip@flip-vs-suspend@a-dp1:
    - shard-apl:          [DMESG-WARN][129] ([i915#180]) -> [PASS][130] +1 similar issue
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-apl8/igt@kms_flip@flip-vs-suspend@a-dp1.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl1/igt@kms_flip@flip-vs-suspend@a-dp1.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [FAIL][131] ([i915#1188]) -> [PASS][132]
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl2/igt@kms_hdr@bpc-switch-suspend.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl2/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
    - shard-skl:          [FAIL][133] ([fdo#108145] / [i915#265]) -> [PASS][134]
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-skl2/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-skl1/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html

  * igt@kms_psr@psr2_primary_page_flip:
    - shard-iclb:         [SKIP][135] ([fdo#109441]) -> [PASS][136] +1 similar issue
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb1/igt@kms_psr@psr2_primary_page_flip.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html

  * igt@sysfs_clients@recycle-many:
    - shard-apl:          [FAIL][137] ([i915#3028]) -> [PASS][138]
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-apl7/igt@sysfs_clients@recycle-many.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-apl6/igt@sysfs_clients@recycle-many.html
    - shard-iclb:         [FAIL][139] ([i915#3028]) -> [PASS][140]
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb6/igt@sysfs_clients@recycle-many.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb6/igt@sysfs_clients@recycle-many.html
    - shard-tglb:         [FAIL][141] ([i915#3028]) -> [PASS][142]
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-tglb8/igt@sysfs_clients@recycle-many.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-tglb7/igt@sysfs_clients@recycle-many.html

  
#### Warnings ####

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-iclb:         [FAIL][143] ([i915#2852]) -> [FAIL][144] ([i915#2842])
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb8/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb7/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@i915_pm_rc6_residency@rc6-fence:
    - shard-iclb:         [WARN][145] ([i915#1804] / [i915#2684]) -> [WARN][146] ([i915#2684])
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb4/igt@i915_pm_rc6_residency@rc6-fence.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb5/igt@i915_pm_rc6_residency@rc6-fence.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3:
    - shard-iclb:         [SKIP][147] ([i915#658]) -> [SKIP][148] ([i915#2920])
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb1/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2:
    - shard-iclb:         [SKIP][149] ([i915#2920]) -> [SKIP][150] ([i915#658]) +1 similar issue
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9791/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/shard-iclb7/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html

  * igt@runner@aborted:
    - shard-glk:          ([FAI

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19711/index.html

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

end of thread, other threads:[~2021-02-21  5:41 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-04  2:04 [PATCH] drm/vblank: Avoid storing a timestamp for the same frame twice Ville Syrjala
2021-02-04  2:04 ` [Intel-gfx] " Ville Syrjala
2021-02-04  3:12 ` [Intel-gfx] ✓ Fi.CI.BAT: success for " Patchwork
2021-02-04  5:44 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-02-04 15:32 ` [PATCH] " Daniel Vetter
2021-02-04 15:32   ` [Intel-gfx] " Daniel Vetter
2021-02-04 15:55   ` Ville Syrjälä
2021-02-04 15:55     ` [Intel-gfx] " Ville Syrjälä
2021-02-05 15:46     ` Daniel Vetter
2021-02-05 15:46       ` [Intel-gfx] " Daniel Vetter
2021-02-05 16:24       ` Ville Syrjälä
2021-02-05 16:24         ` [Intel-gfx] " Ville Syrjälä
2021-02-05 21:19         ` Ville Syrjälä
2021-02-05 21:19           ` [Intel-gfx] " Ville Syrjälä
2021-02-08  9:56           ` Daniel Vetter
2021-02-08  9:56             ` [Intel-gfx] " Daniel Vetter
2021-02-08 16:58             ` Ville Syrjälä
2021-02-08 16:58               ` [Intel-gfx] " Ville Syrjälä
2021-02-08 17:43               ` Daniel Vetter
2021-02-08 17:43                 ` [Intel-gfx] " Daniel Vetter
2021-02-08 18:05                 ` Ville Syrjälä
2021-02-08 18:05                   ` [Intel-gfx] " Ville Syrjälä
2021-02-09 10:07 ` Daniel Vetter
2021-02-09 10:07   ` [Intel-gfx] " Daniel Vetter
2021-02-09 15:40   ` Ville Syrjälä
2021-02-09 15:40     ` [Intel-gfx] " Ville Syrjälä
2021-02-09 16:44     ` Daniel Vetter
2021-02-09 16:44       ` [Intel-gfx] " Daniel Vetter
2021-02-18 16:03 ` [PATCH v2] drm/vblank: Do not store a new vblank timestamp in drm_vblank_restore() Ville Syrjala
2021-02-18 16:03   ` [Intel-gfx] " Ville Syrjala
2021-02-18 16:10   ` Ville Syrjälä
2021-02-18 16:10     ` [Intel-gfx] " Ville Syrjälä
2021-02-19 15:08   ` Daniel Vetter
2021-02-19 15:08     ` [Intel-gfx] " Daniel Vetter
2021-02-19 15:47     ` Ville Syrjälä
2021-02-19 15:47       ` [Intel-gfx] " Ville Syrjälä
2021-02-18 19:08 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/vblank: Avoid storing a timestamp for the same frame twice (rev2) Patchwork
2021-02-18 19:22   ` Ville Syrjälä
2021-02-18 19:51     ` Vudum, Lakshminarayana
2021-02-18 19:29 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-02-18 20:58 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-02-21  4:18 ` [Intel-gfx] ✓ Fi.CI.BAT: success for drm/vblank: Avoid storing a timestamp for the same frame twice (rev3) Patchwork
2021-02-21  5:41 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
