* [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
@ 2022-04-04 13:49 Vinod Govindapillai
  2022-04-04 19:09 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Vinod Govindapillai @ 2022-04-04 13:49 UTC (permalink / raw)
  To: intel-gfx

In configurations with a single DRAM channel, for use cases like
4K 60 Hz, FIFO underruns are observed quite frequently. It looks
like the wm0 watermark values need to be bumped up because the
wm0 memory latency calculations are probably not taking the DRAM
channel's impact into account.

As per Bspec 49325, if the ddb allocation can hold at least
one plane_blocks_per_line, we should have selected method2.
Assuming that modern HW versions have enough dbuf to hold
at least one line, set the wm blocks to be equivalent to the
blocks required per line.
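
To give a sense of the magnitude (illustrative numbers only, assuming
a 512-byte wm block and a linear 3840x2160 XRGB8888 plane, ignoring
rounding details): one line is 3840 * 4 = 15360 bytes, i.e. roughly
15360 / 512 = 30 blocks, so wm0 blocks would be bumped to at least
around 30 for such a plane.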

cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 8824f269e5f5..ae28a8c63ca4 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 		}
 	}
 
-	blocks = fixed16_to_u32_round_up(selected_result) + 1;
+	/*
+	 * Let's have blocks at minimum equivalent to plane_blocks_per_line,
+	 * as there will be at least one line in the lines configuration.
+	 *
+	 * As per Bspec 49325, if the ddb allocation can hold at least
+	 * one plane_blocks_per_line, we should have selected method2 in
+	 * the above logic. Assuming that modern versions have enough dbuf
+	 * and method2 guarantees blocks equivalent to at least 1 line,
+	 * select the blocks as plane_blocks_per_line.
+	 *
+	 * TODO: Revisit the logic when we have a better understanding of
+	 * the DRAM channels' impact on the level 0 memory latency and the
+	 * relevant wm calculations.
+	 */
+	blocks = skl_wm_has_lines(dev_priv, level) ?
+			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
+				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
+			fixed16_to_u32_round_up(selected_result) + 1;
 	lines = div_round_up_fixed16(selected_result,
 				     wp->plane_blocks_per_line);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: program wm blocks to at least blocks required per line
  2022-04-04 13:49 [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line Vinod Govindapillai
@ 2022-04-04 19:09 ` Patchwork
  2022-04-04 19:42 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: Patchwork @ 2022-04-04 19:09 UTC (permalink / raw)
  To: Vinod Govindapillai; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: program wm blocks to at least blocks required per line
URL   : https://patchwork.freedesktop.org/series/102149/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
821ecf698dfe drm/i915: program wm blocks to at least blocks required per line
-:52: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#52: FILE: drivers/gpu/drm/i915/intel_pm.c:5493:
+			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
+				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :

total: 0 errors, 0 warnings, 1 checks, 25 lines checked
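
For illustration, aligning the continuation under the open parenthesis
of max_t() should satisfy this check (a sketch only; the final form was
left to the author):

+			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
+			      fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :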



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: program wm blocks to at least blocks required per line
  2022-04-04 13:49 [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line Vinod Govindapillai
  2022-04-04 19:09 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
@ 2022-04-04 19:42 ` Patchwork
  2022-04-05  0:14 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: Patchwork @ 2022-04-04 19:42 UTC (permalink / raw)
  To: Vinod Govindapillai; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 7441 bytes --]

== Series Details ==

Series: drm/i915: program wm blocks to at least blocks required per line
URL   : https://patchwork.freedesktop.org/series/102149/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11449 -> Patchwork_22773
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/index.html

Participating hosts (50 -> 46)
------------------------------

  Additional (1): fi-icl-u2 
  Missing    (5): shard-tglu fi-bsw-cyan shard-rkl shard-dg1 fi-bdw-samus 

Known issues
------------

  Here are the changes found in Patchwork_22773 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_cs_nop@fork-gfx0:
    - fi-icl-u2:          NOTRUN -> [SKIP][1] ([fdo#109315]) +17 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@amdgpu/amd_cs_nop@fork-gfx0.html
    - fi-bsw-n3050:       NOTRUN -> [SKIP][2] ([fdo#109271]) +17 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-bsw-n3050/igt@amdgpu/amd_cs_nop@fork-gfx0.html

  * igt@core_auth@basic-auth:
    - fi-kbl-soraka:      [PASS][3] -> [DMESG-WARN][4] ([i915#1982])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/fi-kbl-soraka/igt@core_auth@basic-auth.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-kbl-soraka/igt@core_auth@basic-auth.html

  * igt@gem_huc_copy@huc-copy:
    - fi-icl-u2:          NOTRUN -> [SKIP][5] ([i915#2190])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - fi-icl-u2:          NOTRUN -> [SKIP][6] ([i915#4613]) +3 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@i915_selftest@live@hangcheck:
    - bat-dg1-6:          NOTRUN -> [DMESG-FAIL][7] ([i915#4494] / [i915#4957])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/bat-dg1-6/igt@i915_selftest@live@hangcheck.html

  * igt@kms_chamelium@hdmi-hpd-fast:
    - fi-icl-u2:          NOTRUN -> [SKIP][8] ([fdo#111827]) +8 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
    - fi-icl-u2:          NOTRUN -> [SKIP][9] ([fdo#109278]) +2 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html

  * igt@kms_flip@basic-flip-vs-wf_vblank@a-edp1:
    - fi-tgl-u2:          [PASS][10] -> [DMESG-WARN][11] ([i915#402])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/fi-tgl-u2/igt@kms_flip@basic-flip-vs-wf_vblank@a-edp1.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-tgl-u2/igt@kms_flip@basic-flip-vs-wf_vblank@a-edp1.html

  * igt@kms_force_connector_basic@force-load-detect:
    - fi-icl-u2:          NOTRUN -> [SKIP][12] ([fdo#109285])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - fi-icl-u2:          NOTRUN -> [SKIP][13] ([i915#3555])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@prime_vgem@basic-userptr:
    - fi-icl-u2:          NOTRUN -> [SKIP][14] ([i915#3301])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-icl-u2/igt@prime_vgem@basic-userptr.html

  
#### Possible fixes ####

  * igt@i915_pm_rps@basic-api:
    - {fi-jsl-1}:         [DMESG-WARN][15] ([i915#5482]) -> [PASS][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/fi-jsl-1/igt@i915_pm_rps@basic-api.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-jsl-1/igt@i915_pm_rps@basic-api.html

  * igt@i915_selftest@live@active:
    - fi-bsw-n3050:       [DMESG-FAIL][17] ([i915#2927]) -> [PASS][18]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/fi-bsw-n3050/igt@i915_selftest@live@active.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-bsw-n3050/igt@i915_selftest@live@active.html

  * igt@i915_selftest@live@gt_engines:
    - bat-dg1-6:          [INCOMPLETE][19] ([i915#4418]) -> [PASS][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/bat-dg1-6/igt@i915_selftest@live@gt_engines.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/bat-dg1-6/igt@i915_selftest@live@gt_engines.html

  * igt@i915_selftest@live@migrate:
    - fi-bsw-n3050:       [DMESG-WARN][21] -> [PASS][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/fi-bsw-n3050/igt@i915_selftest@live@migrate.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-bsw-n3050/igt@i915_selftest@live@migrate.html

  * igt@kms_flip@basic-flip-vs-modeset@a-edp1:
    - fi-tgl-u2:          [DMESG-WARN][23] ([i915#402]) -> [PASS][24] +1 similar issue
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/fi-tgl-u2/igt@kms_flip@basic-flip-vs-modeset@a-edp1.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/fi-tgl-u2/igt@kms_flip@basic-flip-vs-modeset@a-edp1.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2927]: https://gitlab.freedesktop.org/drm/intel/issues/2927
  [i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4418]: https://gitlab.freedesktop.org/drm/intel/issues/4418
  [i915#4494]: https://gitlab.freedesktop.org/drm/intel/issues/4494
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4957]: https://gitlab.freedesktop.org/drm/intel/issues/4957
  [i915#4983]: https://gitlab.freedesktop.org/drm/intel/issues/4983
  [i915#5482]: https://gitlab.freedesktop.org/drm/intel/issues/5482


Build changes
-------------

  * Linux: CI_DRM_11449 -> Patchwork_22773

  CI-20190529: 20190529
  CI_DRM_11449: 7f954433d09e65d55ca3ba81e1eb5eced93d4203 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6409: 13700f4a3ffaac3a825fe59b014c7c6c48a0a5f1 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_22773: 821ecf698dfe1dd75671bc34fd639e1b78f1d877 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

821ecf698dfe drm/i915: program wm blocks to at least blocks required per line

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/index.html

[-- Attachment #2: Type: text/html, Size: 8411 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: program wm blocks to at least blocks required per line
  2022-04-04 13:49 [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line Vinod Govindapillai
  2022-04-04 19:09 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
  2022-04-04 19:42 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2022-04-05  0:14 ` Patchwork
  2022-04-06  8:14 ` [Intel-gfx] [PATCH] " Lisovskiy, Stanislav
  2022-04-06 12:48 ` Ville Syrjälä
  4 siblings, 0 replies; 15+ messages in thread
From: Patchwork @ 2022-04-05  0:14 UTC (permalink / raw)
  To: Vinod Govindapillai; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 30288 bytes --]

== Series Details ==

Series: drm/i915: program wm blocks to at least blocks required per line
URL   : https://patchwork.freedesktop.org/series/102149/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11449_full -> Patchwork_22773_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (13 -> 12)
------------------------------

  Missing    (1): shard-dg1 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_22773_full:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@gem_exec_schedule@u-submit-golden-slice@rcs0:
    - {shard-rkl}:        [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-6/igt@gem_exec_schedule@u-submit-golden-slice@rcs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-5/igt@gem_exec_schedule@u-submit-golden-slice@rcs0.html

  
Known issues
------------

  Here are the changes found in Patchwork_22773_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_create@create-massive:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][3] ([i915#4991])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl3/igt@gem_create@create-massive.html
    - shard-apl:          NOTRUN -> [DMESG-WARN][4] ([i915#4991])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl2/igt@gem_create@create-massive.html

  * igt@gem_exec_balancer@parallel-contexts:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][5] ([i915#5076])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl3/igt@gem_exec_balancer@parallel-contexts.html

  * igt@gem_exec_capture@pi@rcs0:
    - shard-skl:          [PASS][6] -> [INCOMPLETE][7] ([i915#4547])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl4/igt@gem_exec_capture@pi@rcs0.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl9/igt@gem_exec_capture@pi@rcs0.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-skl:          NOTRUN -> [FAIL][8] ([i915#2846])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@gem_exec_fair@basic-deadline.html
    - shard-glk:          [PASS][9] -> [FAIL][10] ([i915#2846])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk4/igt@gem_exec_fair@basic-deadline.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk3/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][11] ([i915#2842])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl3/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_lmem_swapping@heavy-random:
    - shard-skl:          NOTRUN -> [SKIP][12] ([fdo#109271] / [i915#4613]) +1 similar issue
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl10/igt@gem_lmem_swapping@heavy-random.html

  * igt@gem_lmem_swapping@heavy-verify-multi:
    - shard-apl:          NOTRUN -> [SKIP][13] ([fdo#109271] / [i915#4613])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl4/igt@gem_lmem_swapping@heavy-verify-multi.html

  * igt@gem_lmem_swapping@parallel-random-verify:
    - shard-kbl:          NOTRUN -> [SKIP][14] ([fdo#109271] / [i915#4613]) +2 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl3/igt@gem_lmem_swapping@parallel-random-verify.html

  * igt@gem_lmem_swapping@random:
    - shard-iclb:         NOTRUN -> [SKIP][15] ([i915#4613])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@gem_lmem_swapping@random.html

  * igt@gem_ppgtt@flink-and-close-vma-leak:
    - shard-glk:          [PASS][16] -> [FAIL][17] ([i915#644])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk8/igt@gem_ppgtt@flink-and-close-vma-leak.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk9/igt@gem_ppgtt@flink-and-close-vma-leak.html

  * igt@gem_userptr_blits@input-checking:
    - shard-iclb:         NOTRUN -> [DMESG-WARN][18] ([i915#4991])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@gem_userptr_blits@input-checking.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-skl:          NOTRUN -> [FAIL][19] ([i915#454]) +1 similar issue
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_rpm@modeset-lpsp-stress-no-wait:
    - shard-kbl:          NOTRUN -> [SKIP][20] ([fdo#109271]) +117 similar issues
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl4/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html

  * igt@kms_big_fb@x-tiled-32bpp-rotate-180:
    - shard-glk:          [PASS][21] -> [DMESG-WARN][22] ([i915#118])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk7/igt@kms_big_fb@x-tiled-32bpp-rotate-180.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk5/igt@kms_big_fb@x-tiled-32bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip:
    - shard-apl:          NOTRUN -> [SKIP][23] ([fdo#109271] / [i915#3777])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl4/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
    - shard-skl:          NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#3777]) +5 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][25] ([i915#3763])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl10/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][26] ([i915#3743])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_ccs@pipe-a-crc-primary-basic-y_tiled_gen12_rc_ccs_cc:
    - shard-snb:          NOTRUN -> [SKIP][27] ([fdo#109271]) +48 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-snb7/igt@kms_ccs@pipe-a-crc-primary-basic-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc:
    - shard-iclb:         NOTRUN -> [SKIP][28] ([fdo#109278] / [i915#3886])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_rc_ccs_cc:
    - shard-skl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3886]) +11 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl9/igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][30] ([fdo#109271] / [i915#3886]) +2 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl7/igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html

  * igt@kms_chamelium@hdmi-mode-timings:
    - shard-kbl:          NOTRUN -> [SKIP][31] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl4/igt@kms_chamelium@hdmi-mode-timings.html

  * igt@kms_chamelium@vga-hpd:
    - shard-skl:          NOTRUN -> [SKIP][32] ([fdo#109271] / [fdo#111827]) +20 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl1/igt@kms_chamelium@vga-hpd.html

  * igt@kms_color_chamelium@pipe-b-degamma:
    - shard-snb:          NOTRUN -> [SKIP][33] ([fdo#109271] / [fdo#111827])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-snb7/igt@kms_color_chamelium@pipe-b-degamma.html

  * igt@kms_color_chamelium@pipe-c-degamma:
    - shard-apl:          NOTRUN -> [SKIP][34] ([fdo#109271] / [fdo#111827]) +3 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl4/igt@kms_color_chamelium@pipe-c-degamma.html

  * igt@kms_draw_crc@draw-method-rgb565-blt-4tiled:
    - shard-iclb:         NOTRUN -> [SKIP][35] ([i915#5287])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@kms_draw_crc@draw-method-rgb565-blt-4tiled.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-hdmi-a2:
    - shard-glk:          [PASS][36] -> [FAIL][37] ([i915#79])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk3/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-hdmi-a2.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk5/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-hdmi-a2.html

  * igt@kms_flip@flip-vs-expired-vblank@c-edp1:
    - shard-skl:          [PASS][38] -> [FAIL][39] ([i915#79])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl1/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl7/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
    - shard-apl:          [PASS][40] -> [DMESG-WARN][41] ([i915#180]) +1 similar issue
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl6/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html

  * igt@kms_flip@flip-vs-suspend@c-edp1:
    - shard-tglb:         [PASS][42] -> [DMESG-WARN][43] ([i915#2411] / [i915#2867])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-tglb2/igt@kms_flip@flip-vs-suspend@c-edp1.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-tglb2/igt@kms_flip@flip-vs-suspend@c-edp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@a-edp1:
    - shard-skl:          [PASS][44] -> [FAIL][45] ([i915#2122])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl4/igt@kms_flip@plain-flip-fb-recreate-interruptible@a-edp1.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl6/igt@kms_flip@plain-flip-fb-recreate-interruptible@a-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling:
    - shard-iclb:         [PASS][46] -> [SKIP][47] ([i915#3701])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb7/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb2/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-mmap-gtt:
    - shard-apl:          NOTRUN -> [SKIP][48] ([fdo#109271]) +65 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc:
    - shard-iclb:         NOTRUN -> [SKIP][49] ([fdo#109280])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc.html

  * igt@kms_hdr@bpc-switch@bpc-switch-edp-1-pipe-a:
    - shard-skl:          NOTRUN -> [FAIL][50] ([i915#1188])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl9/igt@kms_hdr@bpc-switch@bpc-switch-edp-1-pipe-a.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - shard-kbl:          [PASS][51] -> [DMESG-WARN][52] ([i915#180]) +1 similar issue
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-kbl4/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-7efc:
    - shard-skl:          NOTRUN -> [FAIL][53] ([fdo#108145] / [i915#265]) +4 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl10/igt@kms_plane_alpha_blend@pipe-a-alpha-7efc.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][54] ([i915#265]) +2 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl4/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html
    - shard-skl:          NOTRUN -> [FAIL][55] ([i915#265])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl9/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-basic:
    - shard-kbl:          NOTRUN -> [FAIL][56] ([fdo#108145] / [i915#265]) +1 similar issue
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl7/igt@kms_plane_alpha_blend@pipe-c-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
    - shard-kbl:          NOTRUN -> [FAIL][57] ([i915#265]) +1 similar issue
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl4/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-b-edp-1-scaler-with-clipping-clamping:
    - shard-iclb:         [PASS][58] -> [SKIP][59] ([i915#5176]) +1 similar issue
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb4/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-b-edp-1-scaler-with-clipping-clamping.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-b-edp-1-scaler-with-clipping-clamping.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-skl:          NOTRUN -> [SKIP][60] ([fdo#109271] / [i915#658]) +2 similar issues
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@psr2_primary_page_flip:
    - shard-iclb:         [PASS][61] -> [SKIP][62] ([fdo#109441]) +1 similar issue
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb4/igt@kms_psr@psr2_primary_page_flip.html

  * igt@kms_scaling_modes@scaling-mode-none@edp-1-pipe-a:
    - shard-skl:          NOTRUN -> [SKIP][63] ([fdo#109271]) +311 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl10/igt@kms_scaling_modes@scaling-mode-none@edp-1-pipe-a.html

  * igt@kms_vblank@pipe-d-wait-forked-hang:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([fdo#109278])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb3/igt@kms_vblank@pipe-d-wait-forked-hang.html

  * igt@kms_vblank@pipe-d-wait-idle:
    - shard-skl:          NOTRUN -> [SKIP][65] ([fdo#109271] / [i915#533]) +1 similar issue
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl10/igt@kms_vblank@pipe-d-wait-idle.html

  * igt@kms_writeback@writeback-check-output:
    - shard-skl:          NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#2437])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl1/igt@kms_writeback@writeback-check-output.html

  * igt@kms_writeback@writeback-pixel-formats:
    - shard-kbl:          NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#2437])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl7/igt@kms_writeback@writeback-pixel-formats.html

  * igt@sysfs_clients@fair-0:
    - shard-skl:          NOTRUN -> [SKIP][68] ([fdo#109271] / [i915#2994]) +4 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@sysfs_clients@fair-0.html

  
#### Possible fixes ####

  * igt@fbdev@pan:
    - {shard-rkl}:        [SKIP][69] ([i915#2582]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-2/igt@fbdev@pan.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@fbdev@pan.html

  * igt@gem_eio@in-flight-contexts-1us:
    - shard-tglb:         [TIMEOUT][71] ([i915#3063]) -> [PASS][72] +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-tglb5/igt@gem_eio@in-flight-contexts-1us.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-tglb7/igt@gem_eio@in-flight-contexts-1us.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [FAIL][73] ([i915#232]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-tglb5/igt@gem_eio@unwedge-stress.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-tglb7/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - {shard-tglu}:       [FAIL][75] ([i915#2842]) -> [PASS][76]
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-tglu-6/igt@gem_exec_fair@basic-none-share@rcs0.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-tglu-1/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-apl:          [FAIL][77] ([i915#2842]) -> [PASS][78]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-apl3/igt@gem_exec_fair@basic-none@vcs0.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-apl7/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-glk:          [FAIL][79] ([i915#2842]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk2/igt@gem_exec_fair@basic-pace@rcs0.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk6/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-iclb:         [FAIL][81] ([i915#2849]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb5/igt@gem_exec_fair@basic-throttle@rcs0.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb8/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_exec_gttfill@all:
    - {shard-rkl}:        [INCOMPLETE][83] ([i915#5080]) -> ([PASS][84], [PASS][85])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@gem_exec_gttfill@all.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-2/igt@gem_exec_gttfill@all.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-4/igt@gem_exec_gttfill@all.html

  * igt@gem_exec_whisper@basic-normal-all:
    - shard-glk:          [DMESG-WARN][86] ([i915#118]) -> [PASS][87]
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk3/igt@gem_exec_whisper@basic-normal-all.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk5/igt@gem_exec_whisper@basic-normal-all.html

  * igt@gem_softpin@invalid:
    - shard-skl:          [DMESG-WARN][88] ([i915#1982]) -> [PASS][89]
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl4/igt@gem_softpin@invalid.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl6/igt@gem_softpin@invalid.html

  * igt@gem_softpin@noreloc-s3:
    - shard-skl:          [INCOMPLETE][90] ([i915#1373] / [i915#4939] / [i915#5230]) -> [PASS][91]
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl9/igt@gem_softpin@noreloc-s3.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl1/igt@gem_softpin@noreloc-s3.html

  * igt@i915_pm_backlight@fade_with_suspend:
    - shard-skl:          [INCOMPLETE][92] ([i915#4939]) -> [PASS][93]
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl4/igt@i915_pm_backlight@fade_with_suspend.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl8/igt@i915_pm_backlight@fade_with_suspend.html
    - {shard-rkl}:        [SKIP][94] ([i915#3012]) -> [PASS][95] +1 similar issue
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@i915_pm_backlight@fade_with_suspend.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@i915_pm_backlight@fade_with_suspend.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [FAIL][96] ([i915#454]) -> [PASS][97]
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb6/igt@i915_pm_dc@dc6-psr.html
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb7/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_rpm@system-suspend-devices:
    - {shard-rkl}:        [FAIL][98] -> [PASS][99]
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@i915_pm_rpm@system-suspend-devices.html
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@i915_pm_rpm@system-suspend-devices.html

  * igt@i915_selftest@live@hangcheck:
    - shard-snb:          [INCOMPLETE][100] ([i915#3921]) -> [PASS][101]
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-snb7/igt@i915_selftest@live@hangcheck.html
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-snb7/igt@i915_selftest@live@hangcheck.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0:
    - {shard-rkl}:        [SKIP][102] ([i915#1845] / [i915#4098]) -> [PASS][103] +23 similar issues
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-1/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_color@pipe-a-ctm-0-25:
    - {shard-rkl}:        [SKIP][104] ([i915#1149] / [i915#1849] / [i915#4070] / [i915#4098]) -> [PASS][105] +2 similar issues
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-1/igt@kms_color@pipe-a-ctm-0-25.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_color@pipe-a-ctm-0-25.html

  * igt@kms_cursor_crc@pipe-a-cursor-64x64-rapid-movement:
    - {shard-rkl}:        [SKIP][106] ([fdo#112022] / [i915#4070]) -> [PASS][107] +7 similar issues
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@kms_cursor_crc@pipe-a-cursor-64x64-rapid-movement.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_cursor_crc@pipe-a-cursor-64x64-rapid-movement.html

  * igt@kms_cursor_edge_walk@pipe-a-128x128-bottom-edge:
    - {shard-rkl}:        [SKIP][108] ([i915#1849] / [i915#4070] / [i915#4098]) -> [PASS][109] +3 similar issues
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-2/igt@kms_cursor_edge_walk@pipe-a-128x128-bottom-edge.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_cursor_edge_walk@pipe-a-128x128-bottom-edge.html

  * igt@kms_cursor_legacy@cursora-vs-flipa-atomic:
    - {shard-rkl}:        [SKIP][110] ([fdo#111825] / [i915#4070]) -> [PASS][111] +2 similar issues
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@kms_cursor_legacy@cursora-vs-flipa-atomic.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_cursor_legacy@cursora-vs-flipa-atomic.html

  * igt@kms_cursor_legacy@flip-vs-cursor-varying-size:
    - shard-iclb:         [FAIL][112] ([i915#2346]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb4/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-xtiled:
    - {shard-rkl}:        [SKIP][114] ([fdo#111314] / [i915#4098] / [i915#4369]) -> [PASS][115] +4 similar issues
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-xtiled.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-xtiled.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-skl:          [FAIL][116] ([i915#79]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl1/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl7/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling:
    - shard-iclb:         [SKIP][118] ([i915#3701]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb8/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling:
    - shard-glk:          [FAIL][120] ([i915#4911]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-glk8/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-glk9/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-fullscreen:
    - {shard-rkl}:        [SKIP][122] ([i915#1849] / [i915#4098]) -> [PASS][123] +18 similar issues
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-fullscreen.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-fullscreen.html

  * igt@kms_hdr@bpc-switch-suspend@bpc-switch-suspend-edp-1-pipe-a:
    - shard-skl:          [FAIL][124] ([i915#1188]) -> [PASS][125]
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl9/igt@kms_hdr@bpc-switch-suspend@bpc-switch-suspend-edp-1-pipe-a.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl10/igt@kms_hdr@bpc-switch-suspend@bpc-switch-suspend-edp-1-pipe-a.html

  * igt@kms_invalid_mode@bad-hsync-end:
    - {shard-rkl}:        [SKIP][126] ([i915#4278]) -> [PASS][127] +1 similar issue
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-2/igt@kms_invalid_mode@bad-hsync-end.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_invalid_mode@bad-hsync-end.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes:
    - shard-kbl:          [DMESG-WARN][128] ([i915#180]) -> [PASS][129] +6 similar issues
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-kbl7/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-kbl4/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html

  * igt@kms_plane@plane-position-hole@pipe-b-planes:
    - {shard-rkl}:        [SKIP][130] ([i915#1849] / [i915#3558]) -> [PASS][131] +1 similar issue
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@kms_plane@plane-position-hole@pipe-b-planes.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_plane@plane-position-hole@pipe-b-planes.html

  * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
    - shard-skl:          [FAIL][132] ([fdo#108145] / [i915#265]) -> [PASS][133]
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-skl10/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-skl7/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html

  * igt@kms_plane_multiple@atomic-pipe-b-tiling-x:
    - {shard-rkl}:        [SKIP][134] ([i915#1849] / [i915#3558] / [i915#4070]) -> [PASS][135]
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-5/igt@kms_plane_multiple@atomic-pipe-b-tiling-x.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_plane_multiple@atomic-pipe-b-tiling-x.html

  * igt@kms_plane_multiple@atomic-pipe-b-tiling-y:
    - {shard-rkl}:        [SKIP][136] ([i915#3558] / [i915#4070]) -> [PASS][137]
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-1/igt@kms_plane_multiple@atomic-pipe-b-tiling-y.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_plane_multiple@atomic-pipe-b-tiling-y.html

  * igt@kms_properties@crtc-properties-legacy:
    - {shard-rkl}:        [SKIP][138] ([i915#1849]) -> [PASS][139]
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-rkl-2/igt@kms_properties@crtc-properties-legacy.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-rkl-6/igt@kms_properties@crtc-properties-legacy.html

  * igt@kms_psr@psr2_primary_render:
    - shard-iclb:         [SKIP][140] ([fdo#109441]) -> [PASS][141]
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11449/shard-iclb6/igt@kms_psr@psr2_primary_render.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/shard-iclb2/igt@kms_psr@psr2_primary_render.html

  * igt@kms_psr@sprite_mmap_cpu:
    - {shard-rkl}:        [SKIP][142] ([i915#1072]) -> [PASS]

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_22773/index.html

[-- Attachment #2: Type: text/html, Size: 33318 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-04 13:49 [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line Vinod Govindapillai
                   ` (2 preceding siblings ...)
  2022-04-05  0:14 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
@ 2022-04-06  8:14 ` Lisovskiy, Stanislav
  2022-04-06  9:21   ` Govindapillai, Vinod
  2022-04-06 12:48 ` Ville Syrjälä
  4 siblings, 1 reply; 15+ messages in thread
From: Lisovskiy, Stanislav @ 2022-04-06  8:14 UTC (permalink / raw)
  To: Vinod Govindapillai; +Cc: intel-gfx

On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> In configurations with a single DRAM channel, for use cases like
> 4K 60 Hz, FIFO underruns are observed quite frequently. It looks
> like the wm0 watermark values need to be bumped up because the
> wm0 memory latency calculations are probably not taking the DRAM
> channel's impact into account.
> 
> As per Bspec 49325, if the ddb allocation can hold at least
> one plane_blocks_per_line, we should have selected method2.
> Assuming that modern HW versions have enough dbuf to hold
> at least one line, set the wm blocks to be equivalent to the
> blocks required per line.
> 
> cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> 
> Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 8824f269e5f5..ae28a8c63ca4 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
>  		}
>  	}
>  
> -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> +	/*
> +	 * Let's have blocks at minimum equivalent to plane_blocks_per_line,
> +	 * as there will be at least one line in the lines configuration.
> +	 *
> +	 * As per Bspec 49325, if the ddb allocation can hold at least
> +	 * one plane_blocks_per_line, we should have selected method2 in
> +	 * the above logic. Assuming that modern versions have enough dbuf
> +	 * and method2 guarantees blocks equivalent to at least 1 line,
> +	 * select the blocks as plane_blocks_per_line.
> +	 *
> +	 * TODO: Revisit the logic when we have a better understanding of
> +	 * the DRAM channels' impact on the level 0 memory latency and the
> +	 * relevant wm calculations.
> +	 */
> +	blocks = skl_wm_has_lines(dev_priv, level) ?
> +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> +			fixed16_to_u32_round_up(selected_result) + 1;
>  	lines = div_round_up_fixed16(selected_result,
>  				     wp->plane_blocks_per_line);

I think this is a good fix: no IGT/BAT regressions are visible, and it
also fixes some of the current issues on the customer side. So I don't
see any reason for it not to be merged.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

P.S.: there is a checkpatch warning, which probably needs to be addressed :)

Stan

>  
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06  8:14 ` [Intel-gfx] [PATCH] " Lisovskiy, Stanislav
@ 2022-04-06  9:21   ` Govindapillai, Vinod
  0 siblings, 0 replies; 15+ messages in thread
From: Govindapillai, Vinod @ 2022-04-06  9:21 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Wed, 2022-04-06 at 11:14 +0300, Lisovskiy, Stanislav wrote:
> On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > In configurations with a single DRAM channel, for use cases like
> > 4K 60 Hz, FIFO underruns are observed quite frequently. It looks
> > like the wm0 watermark values need to be bumped up because the
> > wm0 memory latency calculations are probably not taking the DRAM
> > channel's impact into account.
> > 
> > As per Bspec 49325, if the ddb allocation can hold at least
> > one plane_blocks_per_line, we should have selected method2.
> > Assuming that modern HW versions have enough dbuf to hold
> > at least one line, set the wm blocks to be equivalent to the
> > blocks required per line.
> > 
> > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > 
> > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 8824f269e5f5..ae28a8c63ca4 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
> >  		}
> >  	}
> >  
> > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > +	/*
> > +	 * Let's have blocks at minimum equivalent to plane_blocks_per_line,
> > +	 * as there will be at least one line in the lines configuration.
> > +	 *
> > +	 * As per Bspec 49325, if the ddb allocation can hold at least
> > +	 * one plane_blocks_per_line, we should have selected method2 in
> > +	 * the above logic. Assuming that modern versions have enough dbuf
> > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > +	 * select the blocks as plane_blocks_per_line.
> > +	 *
> > +	 * TODO: Revisit the logic when we have a better understanding of
> > +	 * the DRAM channels' impact on the level 0 memory latency and the
> > +	 * relevant wm calculations.
> > +	 */
> > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > +			fixed16_to_u32_round_up(selected_result) + 1;
> >  	lines = div_round_up_fixed16(selected_result,
> >  				     wp->plane_blocks_per_line);
> 
> I think this is a good fix: no IGT/BAT regressions are visible, and it
> also fixes some of the current issues on the customer side. So I don't
> see any reason for it not to be merged.
> 
> Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> 
> P.S.: there is a checkpatch warning, which probably needs to be addressed :)

Thanks Stan. I will check this and update.

BR
vinod
> 
> Stan
> 
> >  
> > -- 
> > 2.25.1
> > 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-04 13:49 [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line Vinod Govindapillai
                   ` (3 preceding siblings ...)
  2022-04-06  8:14 ` [Intel-gfx] [PATCH] " Lisovskiy, Stanislav
@ 2022-04-06 12:48 ` Ville Syrjälä
  2022-04-06 13:45   ` Lisovskiy, Stanislav
  4 siblings, 1 reply; 15+ messages in thread
From: Ville Syrjälä @ 2022-04-06 12:48 UTC (permalink / raw)
  To: Vinod Govindapillai; +Cc: intel-gfx

On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> In configurations with a single DRAM channel, for use cases like
> 4K 60 Hz, FIFO underruns are observed quite frequently. It looks
> like the wm0 watermark values need to be bumped up because the
> wm0 memory latency calculations are probably not taking the DRAM
> channel's impact into account.
> 
> As per Bspec 49325, if the ddb allocation can hold at least
> one plane_blocks_per_line, we should have selected method2.
> Assuming that modern HW versions have enough dbuf to hold
> at least one line, set the wm blocks to be equivalent to the
> blocks required per line.
> 
> cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> 
> Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 8824f269e5f5..ae28a8c63ca4 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
>  		}
>  	}
>  
> -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> +	/*
> +	 * Let's have blocks at minimum equivalent to plane_blocks_per_line,
> +	 * as there will be at least one line in the lines configuration.
> +	 *
> +	 * As per Bspec 49325, if the ddb allocation can hold at least
> +	 * one plane_blocks_per_line, we should have selected method2 in
> +	 * the above logic. Assuming that modern versions have enough dbuf
> +	 * and method2 guarantees blocks equivalent to at least 1 line,
> +	 * select the blocks as plane_blocks_per_line.
> +	 *
> +	 * TODO: Revisit the logic when we have a better understanding of
> +	 * the DRAM channels' impact on the level 0 memory latency and the
> +	 * relevant wm calculations.
> +	 */
> +	blocks = skl_wm_has_lines(dev_priv, level) ?
> +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> +			fixed16_to_u32_round_up(selected_result) + 1;

That looks rather convoluted.

  blocks = fixed16_to_u32_round_up(selected_result) + 1;
+ /* blah */
+ if (has_lines)
+	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));

Also, since Art said nothing like this should actually be needed,
I think the comment should make it a bit clearer that this
is just a hack to work around the underruns with some single
memory channel configurations.
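
Put together, something along these lines (just a sketch of the
suggested shape, not the final committed form):

	blocks = fixed16_to_u32_round_up(selected_result) + 1;
	/*
	 * HACK: bump blocks to at least one full line to work around
	 * the FIFO underruns seen on some single memory channel
	 * configurations.
	 */
	if (skl_wm_has_lines(dev_priv, level))
		blocks = max(blocks,
			     fixed16_to_u32_round_up(wp->plane_blocks_per_line));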


>  	lines = div_round_up_fixed16(selected_result,
>  				     wp->plane_blocks_per_line);
>  
> -- 
> 2.25.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06 12:48 ` Ville Syrjälä
@ 2022-04-06 13:45   ` Lisovskiy, Stanislav
  2022-04-06 14:01     ` Ville Syrjälä
  0 siblings, 1 reply; 15+ messages in thread
From: Lisovskiy, Stanislav @ 2022-04-06 13:45 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx

On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > In configurations with a single DRAM channel, for use cases like
> > 4K 60 Hz, FIFO underruns are observed quite frequently. It looks
> > like the wm0 watermark values need to be bumped up because the
> > wm0 memory latency calculations are probably not taking the DRAM
> > channel's impact into account.
> > 
> > As per Bspec 49325, if the ddb allocation can hold at least
> > one plane_blocks_per_line, we should have selected method2.
> > Assuming that modern HW versions have enough dbuf to hold
> > at least one line, set the wm blocks to be equivalent to the
> > blocks required per line.
> > 
> > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > 
> > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 8824f269e5f5..ae28a8c63ca4 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
> >  		}
> >  	}
> >  
> > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > +	/*
> > +	 * Let's have blocks at minimum equivalent to plane_blocks_per_line,
> > +	 * as there will be at least one line in the lines configuration.
> > +	 *
> > +	 * As per Bspec 49325, if the ddb allocation can hold at least
> > +	 * one plane_blocks_per_line, we should have selected method2 in
> > +	 * the above logic. Assuming that modern versions have enough dbuf
> > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > +	 * select the blocks as plane_blocks_per_line.
> > +	 *
> > +	 * TODO: Revisit the logic when we have a better understanding of
> > +	 * the DRAM channels' impact on the level 0 memory latency and the
> > +	 * relevant wm calculations.
> > +	 */
> > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > +			fixed16_to_u32_round_up(selected_result) + 1;
> 
> That looks rather convoluted.
> 
>   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> + /* blah */
> + if (has_lines)
> +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));

We probably need to do similar refactoring in the whole function ;-)

> 
> Also, since Art said nothing like this should actually be needed,
> I think the comment should make it a bit clearer that this
> is just a hack to work around the underruns with some single
> memory channel configurations.

It is actually not quite a hack, because we are missing that condition
implementation from Bspec 49325, which instructs us to select method2
when the ddb blocks allocation is known and that ratio (plane buffer
allocation / plane blocks per line) is >= 1.

I mean this one:

"If ('plane buffer allocation' is known and (plane buffer allocation / plane blocks per line) >=1)
Selected Result Blocks = Method 2"
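
In code, that missing check would amount to roughly the following
(a sketch; "ddb_blocks" stands in for the known plane buffer
allocation and is illustrative, not an existing variable in
skl_compute_plane_wm()):

	/* Bspec 49325: select method 2 when the plane buffer allocation
	 * is known and can hold at least one plane_blocks_per_line. */
	if (ddb_blocks >= fixed16_to_u32_round_up(wp->plane_blocks_per_line))
		selected_result = method2;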

Stan

> 
> 
> >  	lines = div_round_up_fixed16(selected_result,
> >  				     wp->plane_blocks_per_line);
> >  
> > -- 
> > 2.25.1
> 
> -- 
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06 13:45   ` Lisovskiy, Stanislav
@ 2022-04-06 14:01     ` Ville Syrjälä
  2022-04-06 14:15       ` Govindapillai, Vinod
  2022-04-06 17:14       ` Lisovskiy, Stanislav
  0 siblings, 2 replies; 15+ messages in thread
From: Ville Syrjälä @ 2022-04-06 14:01 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > In configurations with single DRAM channel, for usecases like
> > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > like the wm0 watermark values need to be bumped up because the wm0
> > > memory latency calculations are probably not taking the DRAM
> > > channel's impact into account.
> > > 
> > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > one plane_blocks_per_line we should have selected method2.
> > > Assuming that modern HW versions have enough dbuf to hold
> > > at least one line, set the wm blocks to the equivalent of blocks
> > > per line.
> > > 
> > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > 
> > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
> > >  		}
> > >  	}
> > >  
> > > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > +	/*
> > > +	 * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > +	 * as there will be at minimum one line for lines configuration.
> > > +	 *
> > > +	 * As per the Bspec 49325, if the ddb allocation can hold at least
> > > +	 * one plane_blocks_per_line, we should have selected method2 in
> > > +	 * the above logic. Assuming that modern versions have enough dbuf
> > > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > > +	 * select the blocks as plane_blocks_per_line.
> > > +	 *
> > > +	 * TODO: Revisit the logic when we have better understanding on DRAM
> > > +	 * channels' impact on the level 0 memory latency and the relevant
> > > +	 * wm calculations.
> > > +	 */
> > > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > +			fixed16_to_u32_round_up(selected_result) + 1;
> > 
> > That looks rather convoluted.
> > 
> >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > + /* blah */
> > + if (has_lines)
> > +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> 
> We probably need to do similar refactoring in the whole function ;-)
> 
> > 
> > Also since Art said nothing like this should actually be needed
> > I think the comment should make it a bit more clear that this
> > is just a hack to work around the underruns with some single
> > memory channel configurations.
> 
> It is actually not quite a hack, because we are missing that condition
> implementation from BSpec 49325, which instructs us to select method2
> when ddb blocks allocation is known and that ratio is >= 1.

The ddb allocation is not yet known, so we're implementing the
algorithm 100% correctly.

And this patch does not implement that missing part anyway.

> 
> I mean this one:
> 
> "If ('plane buffer allocation' is known and (plane buffer allocation / plane blocks per line) >=1)
> Selected Result Blocks = Method 2"
> 
> Stan
> 
> > 
> > 
> > >  	lines = div_round_up_fixed16(selected_result,
> > >  				     wp->plane_blocks_per_line);
> > >  
> > > -- 
> > > 2.25.1
> > 
> > -- 
> > Ville Syrjälä
> > Intel

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06 14:01     ` Ville Syrjälä
@ 2022-04-06 14:15       ` Govindapillai, Vinod
  2022-04-06 17:14       ` Lisovskiy, Stanislav
  1 sibling, 0 replies; 15+ messages in thread
From: Govindapillai, Vinod @ 2022-04-06 14:15 UTC (permalink / raw)
  To: ville.syrjala, Lisovskiy, Stanislav; +Cc: intel-gfx

On Wed, 2022-04-06 at 17:01 +0300, Ville Syrjälä wrote:
> On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> > On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > > In configurations with single DRAM channel, for usecases like
> > > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > > like the wm0 watermark values need to be bumped up because the wm0
> > > > memory latency calculations are probably not taking the DRAM
> > > > channel's impact into account.
> > > > 
> > > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > > one plane_blocks_per_line we should have selected method2.
> > > > Assuming that modern HW versions have enough dbuf to hold
> > > > at least one line, set the wm blocks to the equivalent of blocks
> > > > per line.
> > > > 
> > > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > 
> > > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state
> > > > *crtc_state,
> > > >  		}
> > > >  	}
> > > >  
> > > > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > +	/*
> > > > +	 * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > > +	 * as there will be at minimum one line for lines configuration.
> > > > +	 *
> > > > +	 * As per the Bspec 49325, if the ddb allocation can hold at least
> > > > +	 * one plane_blocks_per_line, we should have selected method2 in
> > > > +	 * the above logic. Assuming that modern versions have enough dbuf
> > > > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > > > +	 * select the blocks as plane_blocks_per_line.
> > > > +	 *
> > > > +	 * TODO: Revisit the logic when we have better understanding on DRAM
> > > > +	 * channels' impact on the level 0 memory latency and the relevant
> > > > +	 * wm calculations.
> > > > +	 */
> > > > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > > > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > > +			fixed16_to_u32_round_up(selected_result) + 1;
> > > 
> > > That looks rather convoluted.
> > > 
> > >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > + /* blah */
> > > + if (has_lines)
> > > +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> > 
> > We probably need to do similar refactoring in the whole function ;-)
> > 
> > > Also since Art said nothing like this should actually be needed
> > > I think the comment should make it a bit more clear that this
> > > is just a hack to work around the underruns with some single
> > > memory channel configurations.
> > 
> > It is actually not quite a hack, because we are missing that condition
> > implementation from BSpec 49325, which instructs us to select method2
> > when ddb blocks allocation is known and that ratio is >= 1.

In the slides sent by Art, it is mentioned that the driver should be using the wm results to arrive
at optimum ddb allocations. So I guess the best solution would be to identify the extra latency
caused by the single DRAM channel and account for it in the wm calculations.
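
Something along these lines, maybe (purely hypothetical sketch - the
field names are from memory and the extra latency value would need to
be characterized first):

        /* pad the level 0 latency on single DRAM channel systems */
        if (dev_priv->dram_info.num_channels == 1)
                wm[0] += 2;     /* extra microseconds, value TBD */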

> 
> The ddb allocation is not yet known, so we're implementing the
> algorithm 100% correctly.
> 
> And this patch does not implement that missing part anyway.

Thanks. Updated the patch as per your comments and V2 sent.

BR
Vinod

> > I mean this one:
> > 
> > "If ('plane buffer allocation' is known and (plane buffer allocation / plane blocks per line)
> > >=1)
> > Selected Result Blocks = Method 2"
> > 
> > Stan
> > 
> > > 
> > > >  	lines = div_round_up_fixed16(selected_result,
> > > >  				     wp->plane_blocks_per_line);
> > > >  
> > > > -- 
> > > > 2.25.1
> > > 
> > > -- 
> > > Ville Syrjälä
> > > Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06 14:01     ` Ville Syrjälä
  2022-04-06 14:15       ` Govindapillai, Vinod
@ 2022-04-06 17:14       ` Lisovskiy, Stanislav
  2022-04-06 18:09         ` Ville Syrjälä
  1 sibling, 1 reply; 15+ messages in thread
From: Lisovskiy, Stanislav @ 2022-04-06 17:14 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx

On Wed, Apr 06, 2022 at 05:01:39PM +0300, Ville Syrjälä wrote:
> On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> > On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > > In configurations with single DRAM channel, for usecases like
> > > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > > like the wm0 watermark values need to be bumped up because the wm0
> > > > memory latency calculations are probably not taking the DRAM
> > > > channel's impact into account.
> > > > 
> > > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > > one plane_blocks_per_line we should have selected method2.
> > > > Assuming that modern HW versions have enough dbuf to hold
> > > > at least one line, set the wm blocks to the equivalent of blocks
> > > > per line.
> > > > 
> > > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > 
> > > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
> > > >  		}
> > > >  	}
> > > >  
> > > > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > +	/*
> > > > +	 * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > > +	 * as there will be at minimum one line for lines configuration.
> > > > +	 *
> > > > +	 * As per the Bspec 49325, if the ddb allocation can hold at least
> > > > +	 * one plane_blocks_per_line, we should have selected method2 in
> > > > +	 * the above logic. Assuming that modern versions have enough dbuf
> > > > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > > > +	 * select the blocks as plane_blocks_per_line.
> > > > +	 *
> > > > +	 * TODO: Revisit the logic when we have better understanding on DRAM
> > > > +	 * channels' impact on the level 0 memory latency and the relevant
> > > > +	 * wm calculations.
> > > > +	 */
> > > > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > > > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > > +			fixed16_to_u32_round_up(selected_result) + 1;
> > > 
> > > That looks rather convoluted.
> > > 
> > >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > + /* blah */
> > > + if (has_lines)
> > > +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> > 
> > We probably need to do similar refactoring in the whole function ;-)
> > 
> > > 
> > > Also since Art said nothing like this should actually be needed
> > > I think the comment should make it a bit more clear that this
> > > is just a hack to work around the underruns with some single
> > > memory channel configurations.
> > 
> > It is actually not quite a hack, because we are missing that condition
> > implementation from BSpec 49325, which instructs us to select method2
> > when ddb blocks allocation is known and that ratio is >= 1.
> 
> The ddb allocation is not yet known, so we're implementing the
> algorithm 100% correctly.
> 
> And this patch does not implement that missing part anyway.

Yes, as I understood it, method2 would just give an amount of blocks that
is at least the dbuf blocks per line.
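
(For reference, skl_wm_method2() is essentially, if I read the helper
right:

        method2 = DIV_ROUND_UP(latency, linetime) * plane_blocks_per_line;

so for any non-zero latency it gives at least one full line worth of
blocks.)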

I wonder whether we should actually fully implement this BSpec clause
and add it at the point where the ddb allocation is known, or are there
any obstacles to doing that, besides having to reshuffle this function a bit?

Stan

> 
> > 
> > I mean this one:
> > 
> > "If ('plane buffer allocation' is known and (plane buffer allocation / plane blocks per line) >=1)
> > Selected Result Blocks = Method 2"
> > 
> > Stan
> > 
> > > 
> > > 
> > > >  	lines = div_round_up_fixed16(selected_result,
> > > >  				     wp->plane_blocks_per_line);
> > > >  
> > > > -- 
> > > > 2.25.1
> > > 
> > > -- 
> > > Ville Syrjälä
> > > Intel
> 
> -- 
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06 17:14       ` Lisovskiy, Stanislav
@ 2022-04-06 18:09         ` Ville Syrjälä
  2022-04-07  6:43           ` Lisovskiy, Stanislav
  0 siblings, 1 reply; 15+ messages in thread
From: Ville Syrjälä @ 2022-04-06 18:09 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Wed, Apr 06, 2022 at 08:14:58PM +0300, Lisovskiy, Stanislav wrote:
> On Wed, Apr 06, 2022 at 05:01:39PM +0300, Ville Syrjälä wrote:
> > On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> > > On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > > > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > > > In configurations with single DRAM channel, for usecases like
> > > > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > > > like the wm0 watermark values need to be bumped up because the wm0
> > > > > memory latency calculations are probably not taking the DRAM
> > > > > channel's impact into account.
> > > > > 
> > > > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > one plane_blocks_per_line we should have selected method2.
> > > > > Assuming that modern HW versions have enough dbuf to hold
> > > > > at least one line, set the wm blocks to the equivalent of blocks
> > > > > per line.
> > > > > 
> > > > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > > 
> > > > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > > > ---
> > > > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > > > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
> > > > >  		}
> > > > >  	}
> > > > >  
> > > > > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > +	/*
> > > > > +	 * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > > > +	 * as there will be at minimum one line for lines configuration.
> > > > > +	 *
> > > > > +	 * As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > +	 * one plane_blocks_per_line, we should have selected method2 in
> > > > > +	 * the above logic. Assuming that modern versions have enough dbuf
> > > > > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > > > > +	 * select the blocks as plane_blocks_per_line.
> > > > > +	 *
> > > > > +	 * TODO: Revisit the logic when we have better understanding on DRAM
> > > > > +	 * channels' impact on the level 0 memory latency and the relevant
> > > > > +	 * wm calculations.
> > > > > +	 */
> > > > > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > > > > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > > > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > > > +			fixed16_to_u32_round_up(selected_result) + 1;
> > > > 
> > > > That looks rather convoluted.
> > > > 
> > > >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > + /* blah */
> > > > + if (has_lines)
> > > > +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> > > 
> > > We probably need to do similar refactoring in the whole function ;-)
> > > 
> > > > 
> > > > Also since Art said nothing like this should actually be needed
> > > > I think the comment should make it a bit more clear that this
> > > > is just a hack to work around the underruns with some single
> > > > memory channel configurations.
> > > 
> > > It is actually not quite a hack, because we are missing that condition
> > > implementation from BSpec 49325, which instructs us to select method2
> > > when ddb blocks allocation is known and that ratio is >= 1.
> > 
> > The ddb allocation is not yet known, so we're implementing the
> > algorithm 100% correctly.
> > 
> > And this patch does not implement that missing part anyway.
> 
> Yes, as I understood it, method2 would just give an amount of blocks that
> is at least the dbuf blocks per line.
> 
> I wonder whether we should actually fully implement this BSpec clause
> and add it at the point where the ddb allocation is known, or are there
> any obstacles to doing that, besides having to reshuffle this function a bit?

We need to calculate the wm to figure out how much ddb to allocate,
and then we'd need the ddb allocation to figure out how to calculate
the wm. Very much chicken vs. egg right there. We'd have to do some
kind of hideous loop where we'd calculate everything twice. I don't
really want to do that since I'd actually like to move the wm
calculation to happen already much earlier during .check_plane()
as that could reduce the amount of redundant wm calculations we
are currently doing.
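
Just to illustrate the shape of it, the flow would end up something like
this (a rough sketch with made-up helper names, not actual code):

        skl_compute_all_wm(state);      /* wm computed without knowing ddb */
        skl_allocate_all_ddb(state);    /* ddb sized from those wm */
        skl_compute_all_wm(state);      /* redo wm now that ddb is known */
        skl_allocate_all_ddb(state);    /* ...which may change the ddb again */

and there is no guarantee that converges after one pass.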

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-06 18:09         ` Ville Syrjälä
@ 2022-04-07  6:43           ` Lisovskiy, Stanislav
  2022-04-07 12:09             ` Govindapillai, Vinod
  0 siblings, 1 reply; 15+ messages in thread
From: Lisovskiy, Stanislav @ 2022-04-07  6:43 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx

On Wed, Apr 06, 2022 at 09:09:06PM +0300, Ville Syrjälä wrote:
> On Wed, Apr 06, 2022 at 08:14:58PM +0300, Lisovskiy, Stanislav wrote:
> > On Wed, Apr 06, 2022 at 05:01:39PM +0300, Ville Syrjälä wrote:
> > > On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> > > > On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > > > > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > > > > In configurations with single DRAM channel, for usecases like
> > > > > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > > > > like the wm0 watermark values need to be bumped up because the wm0
> > > > > > memory latency calculations are probably not taking the DRAM
> > > > > > channel's impact into account.
> > > > > > 
> > > > > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > > one plane_blocks_per_line we should have selected method2.
> > > > > > Assuming that modern HW versions have enough dbuf to hold
> > > > > > at least one line, set the wm blocks to the equivalent of blocks
> > > > > > per line.
> > > > > > 
> > > > > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > > > 
> > > > > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > > > > ---
> > > > > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > > > > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
> > > > > >  		}
> > > > > >  	}
> > > > > >  
> > > > > > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > +	/*
> > > > > > +	 * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > > > > +	 * as there will be at minimum one line for lines configuration.
> > > > > > +	 *
> > > > > > +	 * As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > > +	 * one plane_blocks_per_line, we should have selected method2 in
> > > > > > +	 * the above logic. Assuming that modern versions have enough dbuf
> > > > > > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > > > > > +	 * select the blocks as plane_blocks_per_line.
> > > > > > +	 *
> > > > > > +	 * TODO: Revisit the logic when we have better understanding on DRAM
> > > > > > +	 * channels' impact on the level 0 memory latency and the relevant
> > > > > > +	 * wm calculations.
> > > > > > +	 */
> > > > > > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > > > > > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > > > > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > > > > +			fixed16_to_u32_round_up(selected_result) + 1;
> > > > > 
> > > > > That looks rather convoluted.
> > > > > 
> > > > >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > + /* blah */
> > > > > + if (has_lines)
> > > > > +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> > > > 
> > > > We probably need to do similar refactoring in the whole function ;-)
> > > > 
> > > > > 
> > > > > Also since Art said nothing like this should actually be needed
> > > > > I think the comment should make it a bit more clear that this
> > > > > is just a hack to work around the underruns with some single
> > > > > memory channel configurations.
> > > > 
> > > > It is actually not quite a hack, because we are missing that condition
> > > > implementation from BSpec 49325, which instructs us to select method2
> > > > when ddb blocks allocation is known and that ratio is >= 1.
> > > 
> > > The ddb allocation is not yet known, so we're implementing the
> > > algorithm 100% correctly.
> > > 
> > > And this patch does not implement that missing part anyway.
> > 
> > Yes, as I understood it, method2 would just give an amount of blocks that
> > is at least the dbuf blocks per line.
> > 
> > I wonder whether we should actually fully implement this BSpec clause
> > and add it at the point where the ddb allocation is known, or are there
> > any obstacles to doing that, besides having to reshuffle this function a bit?
> 
> We need to calculate the wm to figure out how much ddb to allocate,
> and then we'd need the ddb allocation to figure out how to calculate
> the wm. Very much chicken vs. egg right there. We'd have to do some
> kind of hideous loop where we'd calculate everything twice. I don't
> really want to do that since I'd actually like to move the wm
> calculation to happen already much earlier during .check_plane()
> as that could reduce the amount of redundant wm calculations we
> are currently doing.

I might be missing some details right now, but why do we need the ddb
allocation to calculate the wms?

I thought it's like we usually calculate the wm levels + min_ddb_allocation,
and then based on that we allocate min_ddb + extra for each plane.
It is correct that at the moment we calculate the wms we only have
min_ddb available, so if this level were even enabled, we would
need at least min_ddb blocks.

I think we could just use that min_ddb value here for that purpose,
because the condition anyway checks whether
(plane buffer allocation / plane blocks per line) >= 1. So even
if this wm level were enabled, the plane buffer allocation would
be at least min_ddb _or higher_ - however that won't affect the
condition, because even if it happens to be "plane buffer allocation
+ some extra" the ratio would still hold.
So if it holds for min_ddb / plane blocks per line, we can
probably safely state it will also hold later on.
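
Something like this is what I have in mind (a hypothetical sketch; the
min_ddb_alloc plumbing into this spot is assumed, it is not existing
code):

        /*
         * BSpec 49325: if the eventual plane buffer allocation can hold
         * at least one line, pick method2. min_ddb_alloc is a lower bound
         * for the final allocation, so checking against it should be safe.
         */
        if (min_ddb_alloc >= fixed16_to_u32_round_up(wp->plane_blocks_per_line))
                selected_result = method2;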

Stan

> 
> -- 
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-07  6:43           ` Lisovskiy, Stanislav
@ 2022-04-07 12:09             ` Govindapillai, Vinod
  2022-04-07 12:31               ` Lisovskiy, Stanislav
  0 siblings, 1 reply; 15+ messages in thread
From: Govindapillai, Vinod @ 2022-04-07 12:09 UTC (permalink / raw)
  To: ville.syrjala, Lisovskiy, Stanislav; +Cc: intel-gfx

On Thu, 2022-04-07 at 09:43 +0300, Lisovskiy, Stanislav wrote:
> On Wed, Apr 06, 2022 at 09:09:06PM +0300, Ville Syrjälä wrote:
> > On Wed, Apr 06, 2022 at 08:14:58PM +0300, Lisovskiy, Stanislav wrote:
> > > On Wed, Apr 06, 2022 at 05:01:39PM +0300, Ville Syrjälä wrote:
> > > > On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> > > > > On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > > > > > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > > > > > In configurations with single DRAM channel, for usecases like
> > > > > > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > > > > > like the wm0 watermark values need to be bumped up because the wm0
> > > > > > > memory latency calculations are probably not taking the DRAM
> > > > > > > channel's impact into account.
> > > > > > > 
> > > > > > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > > > one plane_blocks_per_line we should have selected method2.
> > > > > > > Assuming that modern HW versions have enough dbuf to hold
> > > > > > > at least one line, set the wm blocks to the equivalent of blocks
> > > > > > > per line.
> > > > > > > 
> > > > > > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > > > > 
> > > > > > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > > > > > ---
> > > > > > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > > > > > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > > > > > 
> > > > > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > > > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > > > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > > > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state
> > > > > > > *crtc_state,
> > > > > > >  		}
> > > > > > >  	}
> > > > > > >  
> > > > > > > -	blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > > +	/*
> > > > > > > +	 * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > > > > > +	 * as there will be at minimum one line for lines configuration.
> > > > > > > +	 *
> > > > > > > +	 * As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > > > +	 * one plane_blocks_per_line, we should have selected method2 in
> > > > > > > +	 * the above logic. Assuming that modern versions have enough dbuf
> > > > > > > +	 * and method2 guarantees blocks equivalent to at least 1 line,
> > > > > > > +	 * select the blocks as plane_blocks_per_line.
> > > > > > > +	 *
> > > > > > > +	 * TODO: Revisit the logic when we have better understanding on DRAM
> > > > > > > +	 * channels' impact on the level 0 memory latency and the relevant
> > > > > > > +	 * wm calculations.
> > > > > > > +	 */
> > > > > > > +	blocks = skl_wm_has_lines(dev_priv, level) ?
> > > > > > > +			max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > > > > > +				  fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > > > > > +			fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > 
> > > > > > That looks rather convoluted.
> > > > > > 
> > > > > >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > + /* blah */
> > > > > > + if (has_lines)
> > > > > > +	blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> > > > > 
> > > > > We probably need to do similar refactoring in the whole function ;-)
> > > > > 
> > > > > > Also since Art said nothing like this should actually be needed
> > > > > > I think the comment should make it a bit more clear that this
> > > > > > is just a hack to work around the underruns with some single
> > > > > > memory channel configurations.
> > > > > 
> > > > > It is actually not quite a hack, because we are missing that condition
> > > > > implementation from BSpec 49325, which instructs us to select method2
> > > > > when ddb blocks allocation is known and that ratio is >= 1.
> > > > 
> > > > The ddb allocation is not yet known, so we're implementing the
> > > > algorithm 100% correctly.
> > > > 
> > > > And this patch does not implement that missing part anyway.
> > > 
> > > Yes, as I understood it, method2 would just give an amount of blocks that
> > > is at least the dbuf blocks per line.
> > > 
> > > I wonder whether we should actually fully implement this BSpec clause
> > > and add it at the point where the ddb allocation is known, or are there
> > > any obstacles to doing that, besides having to reshuffle this function a bit?
> > 
> > We need to calculate the wm to figure out how much ddb to allocate,
> > and then we'd need the ddb allocation to figure out how to calculate
> > the wm. Very much chicken vs. egg right there. We'd have to do some
> > kind of hideous loop where we'd calculate everything twice. I don't
> > really want to do that since I'd actually like to move the wm
> > calculation to happen already much earlier during .check_plane()
> > as that could reduce the amount of redundant wm calculations we
> > are currently doing.
> 
> I might be missing some details right now, but why do we need the ddb
> allocation to calculate the wms?
> 
> I thought it's like we usually calculate the wm levels + min_ddb_allocation,
> and then based on that we allocate min_ddb + extra for each plane.
> It is correct that at the moment we calculate the wms we only have
> min_ddb available, so if this level were even enabled, we would
> need at least min_ddb blocks.
> 
> I think we could just use that min_ddb value here for that purpose,
> because the condition anyway checks whether
> (plane buffer allocation / plane blocks per line) >= 1. So even
> if this wm level were enabled, the plane buffer allocation would
> be at least min_ddb _or higher_ - however that won't affect the
> condition, because even if it happens to be "plane buffer allocation
> + some extra" the ratio would still hold.
> So if it holds for min_ddb / plane blocks per line, we can
> probably safely state it will also hold later on.

min_ddb = 110% of the blocks calculated from the 2 methods (blocks + 10%).
It depends on which method we choose. So I don't think we can use it for any assumptions.

But in any case, I think this patch does not cause any harm in most of the usecases expected out of
skl+ platforms which have enough dbuf!

Per-plane ddb allocation happens based on the highest wm level whose min_ddb can fit into the
allocation. If one level does not fit, then that level and the package C state transitions above
it are disabled.
Now if you look at the logic to select which method to use - if the latency >= linetime, we select
the large buffer method, which guarantees that there is at least plane_blocks_per_line. So I think
we can safely assume that the latency for each wm level will mostly be higher, which implies using
the "large buffer" method.

So this change is mostly limited to wm0. And hence it should not impact the ddb allocation, but
the memory fetch bursts might happen slightly more frequently when the processor is in C0?
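
For reference, the method selection mentioned above currently reads
roughly like this in skl_compute_plane_wm() (paraphrased and trimmed,
so take the details with a grain of salt):

        if (wp->y_tiled) {
                selected_result = max_fixed16(method2, wp->y_tile_minimum);
        } else if (latency >= wp->linetime_us) {
                if (DISPLAY_VER(dev_priv) == 9)
                        selected_result = min_fixed16(method1, method2);
                else
                        selected_result = method2;      /* "large buffer" */
        } else {
                selected_result = method1;
        }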

BR
vinod

> 
> Stan
> 
> > -- 
> > Ville Syrjälä
> > Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line
  2022-04-07 12:09             ` Govindapillai, Vinod
@ 2022-04-07 12:31               ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 15+ messages in thread
From: Lisovskiy, Stanislav @ 2022-04-07 12:31 UTC (permalink / raw)
  To: Govindapillai, Vinod; +Cc: intel-gfx

On Thu, Apr 07, 2022 at 03:09:48PM +0300, Govindapillai, Vinod wrote:
> On Thu, 2022-04-07 at 09:43 +0300, Lisovskiy, Stanislav wrote:
> > On Wed, Apr 06, 2022 at 09:09:06PM +0300, Ville Syrjälä wrote:
> > > On Wed, Apr 06, 2022 at 08:14:58PM +0300, Lisovskiy, Stanislav wrote:
> > > > On Wed, Apr 06, 2022 at 05:01:39PM +0300, Ville Syrjälä wrote:
> > > > > On Wed, Apr 06, 2022 at 04:45:26PM +0300, Lisovskiy, Stanislav wrote:
> > > > > > On Wed, Apr 06, 2022 at 03:48:02PM +0300, Ville Syrjälä wrote:
> > > > > > > On Mon, Apr 04, 2022 at 04:49:18PM +0300, Vinod Govindapillai wrote:
> > > > > > > > In configurations with single DRAM channel, for usecases like
> > > > > > > > 4K 60 Hz, FIFO underruns are observed quite frequently. Looks
> > > > > > > > like the wm0 watermark values need to be bumped up because the wm0
> > > > > > > > memory latency calculations are probably not taking the DRAM
> > > > > > > > channel's impact into account.
> > > > > > > >
> > > > > > > > As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > > > > one plane_blocks_per_line we should have selected method2.
> > > > > > > > Assuming that modern HW versions have enough dbuf to hold
> > > > > > > > at least one line, set the wm blocks to the equivalent of blocks
> > > > > > > > per line.
> > > > > > > >
> > > > > > > > cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > > cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > > > > >
> > > > > > > > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > > > > > > > ---
> > > > > > > >  drivers/gpu/drm/i915/intel_pm.c | 19 ++++++++++++++++++-
> > > > > > > >  1 file changed, 18 insertions(+), 1 deletion(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > > > > > > > index 8824f269e5f5..ae28a8c63ca4 100644
> > > > > > > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > > > > > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > > > > > > @@ -5474,7 +5474,24 @@ static void skl_compute_plane_wm(const struct intel_crtc_state
> > > > > > > > *crtc_state,
> > > > > > > >           }
> > > > > > > >   }
> > > > > > > >
> > > > > > > > - blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > > > + /*
> > > > > > > > +  * Lets have blocks at minimum equivalent to plane_blocks_per_line
> > > > > > > > +  * as there will be at minimum one line for lines configuration.
> > > > > > > > +  *
> > > > > > > > +  * As per the Bspec 49325, if the ddb allocation can hold at least
> > > > > > > > +  * one plane_blocks_per_line, we should have selected method2 in
> > > > > > > > +  * the above logic. Assuming that modern versions have enough dbuf
> > > > > > > > +  * and method2 guarantees blocks equivalent to at least 1 line,
> > > > > > > > +  * select the blocks as plane_blocks_per_line.
> > > > > > > > +  *
> > > > > > > > +  * TODO: Revisit the logic when we have better understanding on DRAM
> > > > > > > > +  * channels' impact on the level 0 memory latency and the relevant
> > > > > > > > +  * wm calculations.
> > > > > > > > +  */
> > > > > > > > + blocks = skl_wm_has_lines(dev_priv, level) ?
> > > > > > > > +                 max_t(u32, fixed16_to_u32_round_up(selected_result) + 1,
> > > > > > > > +                           fixed16_to_u32_round_up(wp->plane_blocks_per_line)) :
> > > > > > > > +                 fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > >
> > > > > > > That looks rather convoluted.
> > > > > > >
> > > > > > >   blocks = fixed16_to_u32_round_up(selected_result) + 1;
> > > > > > > + /* blah */
> > > > > > > + if (has_lines)
> > > > > > > +   blocks = max(blocks, fixed16_to_u32_round_up(wp->plane_blocks_per_line));
> > > > > >
> > > > > > We probably need to do similar refactoring in the whole function ;-)
> > > > > >
> > > > > > > Also since Art said nothing like this should actually be needed
> > > > > > > I think the comment should make it a bit more clear that this
> > > > > > > is just a hack to work around the underruns with some single
> > > > > > > memory channel configurations.
> > > > > >
> > > > > > It is actually not quite a hack, because we are missing that condition
> > > > > > implementation from BSpec 49325, which instructs us to select method2
> > > > > > when ddb blocks allocation is known and that ratio is >= 1.
> > > > >
> > > > > The ddb allocation is not yet known, so we're implementing the
> > > > > algorithm 100% correctly.
> > > > >
> > > > > And this patch does not implement that missing part anyway.
> > > >
> > > > Yes, as I understood it, method2 would just give an amount of blocks that
> > > > is at least the dbuf blocks per line.
> > > >
> > > > I wonder whether we should actually fully implement this BSpec clause
> > > > and add it at the point where the ddb allocation is known, or are there
> > > > any obstacles to doing that, besides having to reshuffle this function a bit?
> > >
> > > We need to calculate the wm to figure out how much ddb to allocate,
> > > and then we'd need the ddb allocation to figure out how to calculate
> > > the wm. Very much chicken vs. egg right there. We'd have to do some
> > > kind of hideous loop where we'd calculate everything twice. I don't
> > > really want to do that since I'd actually like to move the wm
> > > calculation to happen already much earlier during .check_plane()
> > > as that could reduce the amount of redundant wm calculations we
> > > are currently doing.
> >
> > I might be missing some details right now, but why do we need the ddb
> > allocation to calculate the wms?
> >
> > I thought it's like we usually calculate the wm levels + min_ddb_allocation,
> > and then based on that we allocate min_ddb + extra for each plane.
> > It is correct that at the moment we calculate the wms we only have
> > min_ddb available, so if this level were even enabled, we would
> > need at least min_ddb blocks.
> >
> > I think we could just use that min_ddb value here for that purpose,
> > because the condition anyway checks whether
> > (plane buffer allocation / plane blocks per line) >= 1. So even
> > if this wm level were enabled, the plane buffer allocation would
> > be at least min_ddb _or higher_ - however that won't affect the
> > condition, because even if it happens to be "plane buffer allocation
> > + some extra" the ratio would still hold.
> > So if it holds for min_ddb / plane blocks per line, we can
> > probably safely state it will also hold later on.
> 
> min_ddb = 110% of the blocks calculated from the 2 methods (blocks + 10%).
> It depends on which method we choose. So I don't think we can use it for any assumptions.

Min_ddb is what matters for us because it is the actual ddb allocation we use,
not the wm level.
As I understand it, the validity of (plane buffer allocation / plane blocks per line) >= 1
depends only on whether the allocation can get lower after we do the full allocation
in skl_allocate_plane_ddb, and that allocation can't be smaller than min_ddb.

The allocation algorithm works in such a way that it tries to allocate at least
min_ddb; if it can't, the wm level is disabled.
However, if it succeeds, it might try to add some extra blocks to the allocation
(see skl_allocate_plane_ddb).
So yes, even though we don't know the exact allocation in skl_compute_plane_wm,
we can safely assume it won't be less than min_ddb, which means that if
min_ddb / plane_blocks_per_line >= 1 is true, it will also be true later on,
if that wm level is enabled at all.
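
Roughly the shape of it, as a simplified sketch of what
skl_allocate_plane_ddb() ends up doing (not the literal code, and
total_min_ddb() is a made-up helper):

        /* find the highest wm level whose min_ddb fits for every plane */
        for (level = max_level; level >= 0; level--) {
                if (total_min_ddb(crtc_state, level) <= alloc_size)
                        break;
        }
        /* levels above this stay disabled, blocking the deeper C-states */

Only after that do the leftover blocks get handed out as extra.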

Stan


> 
> But in any case, I think this patch does not cause any harm in most of the usecases expected out of
> skl+ platforms which have enough dbuf!
> 
> Per-plane ddb allocation happens based on the highest wm level whose min_ddb can fit into the
> allocation. If one level does not fit, then that level and the package C state transitions above
> it are disabled.
> Now if you look at the logic to select which method to use - if the latency >= linetime, we select
> the large buffer method, which guarantees that there is at least plane_blocks_per_line. So I think
> we can safely assume that the latency for each wm level will mostly be higher, which implies using
> the "large buffer" method.
> 
> So this change is mostly limited to wm0. And hence it should not impact the ddb allocation, but
> the memory fetch bursts might happen slightly more frequently when the processor is in C0?
> 
> BR
> vinod
> 
> >
> > Stan
> >
> > > --
> > > Ville Syrjälä
> > > Intel

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread

Thread overview: 15+ messages
2022-04-04 13:49 [Intel-gfx] [PATCH] drm/i915: program wm blocks to at least blocks required per line Vinod Govindapillai
2022-04-04 19:09 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2022-04-04 19:42 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-04-05  0:14 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2022-04-06  8:14 ` [Intel-gfx] [PATCH] " Lisovskiy, Stanislav
2022-04-06  9:21   ` Govindapillai, Vinod
2022-04-06 12:48 ` Ville Syrjälä
2022-04-06 13:45   ` Lisovskiy, Stanislav
2022-04-06 14:01     ` Ville Syrjälä
2022-04-06 14:15       ` Govindapillai, Vinod
2022-04-06 17:14       ` Lisovskiy, Stanislav
2022-04-06 18:09         ` Ville Syrjälä
2022-04-07  6:43           ` Lisovskiy, Stanislav
2022-04-07 12:09             ` Govindapillai, Vinod
2022-04-07 12:31               ` Lisovskiy, Stanislav
